A Supreme Court just replaced its news anchors... with AI?
They stand behind a sleek desk, confidently delivering the latest news from the state’s highest court. Image source: Igor Omilaev / Unsplash
They blink, they smile, and they gesture like seasoned anchors. But they’re not who they seem.
And the story behind their presence is changing the way justice is communicated.
If you clicked on a recent ruling from the Arizona Supreme Court, you didn’t get a press release.
You got a video—hosted by two well-dressed, well-spoken, uncannily lifelike “reporters.”
Their names are Daniel and Victoria. They have no pulse, no past, and no paycheck.
That’s because they’re AI avatars. And they just became the new digital faces of Arizona’s highest court.

The avatars were designed by the court’s communications director, Alberto Rodriguez, who says the goal is simple: deliver court news faster, more clearly, and in a format people will actually watch.
“It’s really an opportunity for us to meet the public where they’re consuming their media,” Rodriguez said.
What used to take up to six hours—writing, filming, editing—is now done in minutes.
But this isn’t AI freelancing from the bench. Every word Daniel and Victoria say is written by the justices themselves.
Also read: AI is giving scammers a dangerous new edge—especially in grandparent hoaxes
No generative models, no algorithmic summaries, no ChatGPT scripting your legal rights.
Chief Justice Ann Timmer says the experiment isn’t about flash—it’s about trust.
“For years, we took it for granted that, ‘Of course you trust the courts, of course you trust judges,’” she said. “But if people don’t believe that, it doesn’t matter.”
The avatars are designed to demystify the legal system—breaking down rulings in language that’s easier to understand and access.
And while Daniel and Victoria are new to the scene, AI isn’t. Lawyers already use AI tools for research and document review.
Courts use it to analyze data. The avatars are just the most visible step forward—and perhaps the most controversial.
Around the country, things are getting messier. In New York, a man tried to argue his case using an AI “lawyer.” The court said no.
In California, the bar exam quietly used AI to help write test questions—until it got called out.
And in several cases, AI tools hallucinated fake legal precedents in filings.
Also read: This messaging application's new AI button has users frustrated—here’s what to know
Chief Justice Timmer says Arizona won’t go there. “This AI, at least that we’re using, is not generative,” she emphasized.
If the public is concerned “that we’ll use AI to start substituting for judgment, I don’t think that will ever happen,” she added.
Whether you’re following an arson appeal or just curious about the future of tech in justice, Daniel and Victoria may be just the beginning.
AI anchors can speak multiple languages. They don’t get tired, don’t editorialize, and don’t take holidays. They could transform how government institutions communicate.
But some questions linger: Who’s responsible if they get something wrong?
Will we tune out when we know the voice isn’t real?
And does transparency mean anything without human empathy behind it?
Read next: Robot goes haywire—raising new questions about AI safety and control
Are Daniel and Victoria a smart step forward—or a slippery slope? Would you trust an AI to explain your rights? Would you trust one not to? We want to hear from you. Drop your take in the comments. Let's talk about justice, journalism, and the uncanny future knocking on the courthouse door.
Key Takeaways
- The Arizona Supreme Court is the first in the US to use AI avatars—named Daniel and Victoria—to deliver news about its rulings.
- The initiative reduces video production time from hours to minutes, helping the public get legal updates more quickly.
- Officials say the AI used is not generative, meaning it doesn’t make independent decisions or interpretations.
- Legal professionals nationwide already use AI for research and paperwork—but concerns over AI hallucinations and accuracy persist.