DecodeTheVote.com
Application Development, Branding, and Marketing.
Voting should be the easiest civic act there is: you show up, you choose, you leave. But for most Americans, the part before showing up (figuring out who and what is on the ballot, what the candidates have actually done, and what a ballot measure really means) has become an exhausting exercise in navigating partisan spin, confusing legalese, and an internet full of content designed to persuade rather than inform. DecodeTheVote.com was built to fix that. It is a free, nonpartisan voter intelligence platform that pulls real data from government sources, runs it through two independent AI models that check each other's work, and delivers plain-English candidate analysis, transparent scoring, and a personalized ballot guide to any voter in the country, with no agenda, no recommendations, and no partisan filter. Think of it as the voter guide that should have always existed: one that starts with the facts, shows its work, and trusts you to make up your own mind.
DecodeTheVote currently covers federal elections and state-level races across all 50 states, plus select local elections and ballot initiatives. The platform is at v1.11.2. Full local coverage down to the city council, school board, judicial, and ballot measure level is the stated development goal, with November 2026 as the operational deadline. The voter-facing feature set includes:

- A ballot lookup tool that takes a street address or ZIP code and returns the voter's specific ballot, with every race and measure on it
- Candidate profiles with a Decode Grade and full sourced analysis
- A My Priorities tool where users rank their positions on 10 standard policy issues and receive an alignment percentage against each candidate's verified record (no recommendation is ever made)
- Current officeholder records and legislative history
- A live bills tracker following active legislation
- Live FEC campaign finance data
- Election results tracking
- A voter registration eligibility check
- A polling place finder
- A personalized, downloadable voter guide PDF combining ballot races, alignment scores, election dates, and registration links
- A built-in polling mechanism capturing real-time voter sentiment on candidates and races
- Curated podcast recommendations spanning multiple ideological perspectives (The Ezra Klein Show, NPR Politics, PBS NewsHour, The Dispatch, FiveThirtyEight, The Argument, Politico Playbook Deep Dive)
- A personalized election news briefing
- A newsletter available at daily, weekly, or monthly frequency
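The My Priorities alignment percentage can be sketched as a weighted agreement score. This is an illustrative sketch only: the function name, the -1.0 to +1.0 stance encoding, and the priority-weighting scheme are assumptions, not the platform's published method.

```python
# Hypothetical sketch of a My Priorities-style alignment score. Both the
# voter's ranked positions and the candidate's verified record are assumed
# to be encoded per issue on a -1.0 (oppose) .. +1.0 (support) scale.

def alignment_percentage(voter, candidate):
    """Weighted agreement between a voter's stated priorities and a
    candidate's record, as a 0-100 percentage. Returns None when the
    records share no issues, mirroring the platform's no-guessing stance."""
    shared = set(voter) & set(candidate)
    if not shared:
        return None
    agreement = 0.0
    total_weight = 0.0
    for issue in shared:
        weight, position = voter[issue]          # (priority weight, stance)
        # distance 0.0 = identical stance, 2.0 = fully opposed stances
        distance = abs(position - candidate[issue])
        agreement += weight * (1.0 - distance / 2.0)
        total_weight += weight
    return round(100.0 * agreement / total_weight, 1)

voter = {"healthcare": (3.0, 0.8), "energy": (1.0, -0.5)}
candidate = {"healthcare": 0.8, "energy": 0.5}
print(alignment_percentage(voter, candidate))  # → 87.5
```

Note that the output is a descriptive agreement score, never a recommendation: the number reports how closely verified records match stated priorities, and the interpretation is left to the voter.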
How the methodology works
Every candidate evaluation begins with public data enrichment: FEC filings, official voting records, ballot text, and campaign finance disclosures are loaded from authoritative government APIs before any AI runs. No speculation, no scraped web content, no anonymous sourcing.
Two independent AI models then analyze the same enriched dataset. One handles policy interpretation; the other handles accountability analysis. They receive identical, strictly nonpartisan prompts that ask for concrete facts, specific bills and donors, and steelmanned critiques from opponents. Neither model sees the other’s output. A third synthesis pass reconciles the two, and where they disagree, the disagreement is shown rather than hidden. The models are prompt personas named “Avery” (policy) and “Mara” (accountability), which allows users to see which analyst produced which claim and detect bias at the persona level, not just the model level. The AI is not dependent on any single provider; the architecture is designed so any model can be substituted or added without structural changes.
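The dual-analyst flow above can be sketched in a few lines. Everything here is an assumption for illustration: the `call_model` hook stands in for whatever provider-agnostic client the platform uses, and the prompt text is a paraphrase, not the production prompt. Only the shape (identical prompt, two personas, blind analysis, synthesis that surfaces disagreement) comes from the description.

```python
# Illustrative sketch of the Avery/Mara dual-persona pipeline. `call_model`
# is an assumed hook returning a list of claim strings; any model provider
# could be substituted, matching the provider-agnostic design.
from typing import Callable

def analyze(call_model: Callable[[str, str], list[str]], enriched: str) -> dict:
    # Identical, strictly nonpartisan prompt; only the analytical role differs.
    prompt = ("Using only the enriched public record below, state concrete "
              "facts, name specific bills and donors, and steelman the "
              "strongest opposing critique.\n\n" + enriched)
    # Neither persona sees the other's output.
    avery = set(call_model("policy", prompt))          # Avery: policy
    mara = set(call_model("accountability", prompt))   # Mara: accountability
    # Synthesis pass: agreements merge; disagreements are shown, not hidden.
    return {
        "agreed": sorted(avery & mara),
        "disputed": sorted(avery ^ mara),  # surfaced to the user as-is
    }
```

Keeping the persona label attached to each claim is what lets a reader audit bias at the analyst level rather than only at the model level.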
The Decode Grade is computed from five verifiable components: transparency, track record, policy specificity, accessibility, and executive effectiveness. Each is scored from observable public records only. Weights differ by office type. Two candidates with identical records receive identical grades. An insufficient-data gate prevents grading where source material is too thin to be reliable. The full methodology is publicly published, versioned, and rendered from the same source as the live site, so the published document always reflects exactly what the production system uses. Version 1.0 launched in June 2025; version 2.0 was released in April 2026.
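The grading logic described above can be sketched as follows. The five component names come from the methodology; the weights, the gate threshold, and the letter cutoffs are illustrative placeholders, not the published values.

```python
# Hedged sketch of a Decode Grade-style computation: five components scored
# from public records, office-type-specific weights, and an insufficient-data
# gate. All numeric values below are assumed for illustration.

COMPONENTS = ("transparency", "track_record", "policy_specificity",
              "accessibility", "executive_effectiveness")

WEIGHTS = {  # weights differ by office type (these values are assumptions)
    "legislative": {"transparency": .25, "track_record": .30,
                    "policy_specificity": .25, "accessibility": .15,
                    "executive_effectiveness": .05},
    "executive":   {"transparency": .20, "track_record": .25,
                    "policy_specificity": .15, "accessibility": .10,
                    "executive_effectiveness": .30},
}

def decode_grade(scores: dict, office: str, coverage: float):
    """scores: component -> 0..100, derived from observable records only.
    coverage: fraction of components backed by reliable source material."""
    if coverage < 0.6:          # insufficient-data gate (threshold assumed)
        return None             # no grade is better than an unreliable one
    w = WEIGHTS[office]
    total = sum(w[c] * scores[c] for c in COMPONENTS)
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if total >= cutoff:
            return letter
    return "F"
```

Because the function is a pure mapping from records to a grade, the determinism claim holds by construction: identical inputs always produce identical grades.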
The data asset
Every voter interaction generates structured first-party civic data: issue priority rankings, alignment patterns by geography and demographic, polling responses on specific candidates and races, newsletter engagement by state. Over time this builds voter personas and a clean political data layer that has significant value for academic research, civic organizations, media, and ethical commercial applications in the political intelligence market. This is the long-term asset the platform is accumulating while delivering a public service.
What is still in development
Full local coverage (city council, school board, judicial races, water districts, local ballot measures) is the primary development priority. The “Request an Election” feature on the elections page allows users to flag races not yet covered, directly informing the build queue. Expanded demographic data integration, deeper state legislative coverage, additional language support beyond English, and a public-facing API for civic organizations and local news outlets are on the roadmap. The donation and support infrastructure is live, indicating a path toward community-funded sustainability alongside grant and partnership revenue.