
DistilBART news summarizer

Hugging Face · January 05, 2026 · Sachin21112004/distilbart-news-summarizer

This model is a lightweight, “batteries-included” option for abstractive news summarization. It’s built on top of the distilled BART CNN/DM checkpoint (sshleifer/distilbart-cnn-12-6) and is intended for the common “take a long-ish article and produce a short paragraph summary” workflow.

The card includes a simple Transformers recipe: load the tokenizer + AutoModelForSeq2SeqLM, truncate inputs to the model’s max context window, and generate with a small beam search. If you’re building a news reader, research assistant, or internal “summarize this link” tool, this kind of smaller seq2seq model can be a nice fit when you want predictable latency and don’t need the cost/complexity of running a much larger instruction-tuned LLM.
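The recipe described above can be sketched as follows. This is a minimal, hedged example, not the card's verbatim snippet: the generation parameters (`num_beams`, `max_length`, `min_length`) and the 1024-token input cap are reasonable defaults for BART-family summarizers, not values confirmed by the card, and running it requires downloading the checkpoint.

```python
# Minimal sketch of the summarization workflow: load tokenizer + seq2seq model,
# truncate the input to the context window, and generate with a small beam search.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Sachin21112004/distilbart-news-summarizer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The city council voted on Tuesday to approve a new transit plan that "
    "expands bus service to the northern suburbs, funded by a modest increase "
    "in the regional sales tax. Officials said construction of new routes "
    "would begin next spring and finish within two years."
)

# Truncate long articles to the model's max input length (assumed 1024 tokens,
# typical for BART-family models).
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")

# Beam-search generation; parameter values are illustrative assumptions.
summary_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=128,
    min_length=20,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

For a batch tool (e.g. a "summarize this link" endpoint), the same call works on a list of articles by passing them together to the tokenizer with `padding=True`.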

View on Hugging Face

Source listing: https://huggingface.co/models?sort=modified