Havelock Orality Analyzer (oral vs literate classifier)
thestalwart/havelock-orality is a niche-but-fun text classifier that tries to quantify how “oral” a piece of writing feels (think speeches, sermons, epic poetry) versus how “literate” (think academic prose or legal text). It frames the task using the orality/literacy taxonomy popularized by Walter Ong — the name presumably nods to classicist Eric Havelock, who worked on the same oral-vs-written divide — and the model card is unusually concrete about what’s shipped: a document-level regressor that outputs a 0–1 orality score, plus two span-level marker classifiers (a coarse oral-vs-literate labeler and a finer-grained subtype taxonomy).
If you’re building writing analytics, rhetorical style tools, or just want a dataset-backed example of turning cultural theory into an ML task, this is a solid artifact. The README also includes simple reference thresholds (e.g., what a score of 0.9+ means) and a short description of the training data volume, which makes it easier to reason about where the model should work well (short, strongly styled text) versus where it may be noisy (technical docs, code-heavy content, or multilingual inputs).
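The README’s exact thresholds aren’t reproduced here, so the band cut-offs below are illustrative placeholders, and `interpret_orality` / `toy_oral_marker_ratio` are hypothetical helper names rather than anything shipped with the model. A minimal sketch of how a 0–1 orality score and a crude span-level marker signal might be consumed downstream:

```python
# Illustrative sketch only: the band cut-offs and helper names below are
# made up for this example, not taken from the model card.

# A few surface features Ong associates with oral style: additive "and"
# chains, direct second-person address, formulaic exclamations.
ORAL_MARKERS = {"and", "you", "behold", "o"}

def toy_oral_marker_ratio(text: str) -> float:
    """Fraction of tokens that match a tiny oral-marker lexicon."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in ORAL_MARKERS)
    return hits / len(tokens)

def interpret_orality(score: float) -> str:
    """Map a 0-1 orality score to a coarse band (cut-offs are invented)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.9:
        return "strongly oral"
    if score >= 0.6:
        return "leaning oral"
    if score >= 0.4:
        return "mixed"
    return "literate"

speech = "And you shall see, and you shall hear, and you shall remember."
prose = "The regressor was trained on document-level annotations."
print(interpret_orality(0.93))                                        # strongly oral
print(toy_oral_marker_ratio(speech) > toy_oral_marker_ratio(prose))   # True
```

In practice the document-level score would presumably come from the standard `transformers` text-classification pipeline the listing tags, e.g. `pipeline("text-classification", model="thestalwart/havelock-orality")`, assuming the model is hosted in the usual format.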
Quick stats from the listing feed: pipeline: text-classification · 1 like.
Source listing: https://huggingface.co/models?sort=modified