

While there are no major reports of widespread election fraud directly affecting recent Minnesota election outcomes as of late 2025, several incidents and policy debates have drawn attention, particularly amid broader fraud scandals in state social services programs.

Key Recent Cases of Detected and Prosecuted Voter Fraud

2025 Voter Registration Fraud Scheme: Two Nevada residents (formerly of Minnesota) were charged federally with conspiracy to submit hundreds of fraudulent voter registration applications across multiple counties in 2021-2022. One pleaded guilty in July 2025. The scheme was uncovered by local election officials (starting in Carver County), and no fraudulent ballots were cast or counted. Minnesota Secretary of State Steve Simon emphasized that the case demonstrates the effectiveness of the state's safeguards, as the fraudulent applications were flagged immediately.

Individual Incidents: Isolated cases include a woman sentenced in October 2025 for attempting to cast an absentee ballot (flagged and not counted), and older cases such as an election judge charged in 2024 with improperly allowing unregistered voters. These cases are rare and were prevented from affecting results, according to official statements and reports from sources like the Associated Press and the Minnesota Secretary of State's office.

Policy Scrutiny Amid Broader Fraud Concerns

A December 29, 2025, Fox News article highlighted Minnesota's long-standing "vouching" policy, which allows a registered voter to vouch for the residency of up to eight others on same-day registration (no ID required for the vouched voters in some cases). Critics, including conservatives like Scott Presler and Sen. Mike Lee, argue this creates potential for abuse, especially given same-day registration and recent social services fraud scandals involving Minnesota's Somali community (e.g., hundreds of millions of dollars in alleged misuse of federal funds).
However, state officials note the policy has existed for over 50 years with no evidence of systemic exploitation leading to fraudulent votes. The debate has intensified because of unrelated but high-profile welfare and childcare fraud investigations (e.g., the Feeding Our Future scandal), which some Republicans link to election integrity concerns, though no direct connection to voting has been substantiated.

Historical Context

Older allegations, such as 2020 ballot-harvesting claims tied to Rep. Ilhan Omar (promoted by Project Veritas), were largely debunked or lacked corroboration, with sources retracting statements.

In summary, proven voter fraud in Minnesota remains extremely limited, and existing systems have detected and prevented it. Claims of widespread fraud often stem from policy critiques or conflation with non-election fraud scandals.

Simply ask Photos to make the edits you want and watch the changes appear. Plus, we're making it easier to see if an image was edited using AI with C2PA Content Credentials.

Selena Shang, Senior Product Manager, Google Photos

We're making it unbelievably easy to quickly edit your images in Google Photos: just ask Photos to edit your pictures for you. Coming first to Pixel 10 in the U.S., you can simply describe the edits you want to make by text or voice in Photos' editor, and watch the changes appear. And to further improve transparency around AI edits, we're adding support for C2PA Content Credentials in Google Photos.

Edit by simply asking

Our recently redesigned photo editor already makes editing quick and easy for anyone, regardless of your editing expertise, by providing AI-powered suggestions that combine multiple effects for quick edits and putting all our powerful editing tools in one place. You can also simply tap or circle parts of an image right when you open the editor and get suggestions for editing that specific area, like erasing a distraction. Today, we're introducing conversational editing capabilities in the redesigned photo editor, so you'll have more ways to make stunning edits, including simple gestures, one-tap suggestions and now, natural language.

Thanks to advanced Gemini capabilities, Photos can now help you make custom AI-powered edits that bring your vision to life in just seconds. No need to select tools or adjust sliders; all you have to do is ask Photos for the edits you want to see. Because this is an open-ended, conversational experience, you don't have to indicate which tools you want to use. For example, you could ask for a specific edit, like "remove the cars in the background," or something more general, like "restore this old photo," and Photos will understand the changes you're trying to make.
You can even make multiple requests in a single prompt, like "remove the reflections and fix the washed out colors." And if you truly have no idea where to start, you can just begin by typing or saying "make it better," or use one of the provided suggestions. Then, if you want to make tweaks, you can add follow-up instructions after each edit to fine-tune your image and get it looking just right.

Beyond corrective edits like fixing lighting and removing distractions, you can ask for more creative help. For example, you could change the background of your image, add fun items like a party hat or sunglasses to the main subject, and much more. Without having to worry about choosing which tools to use and how they'll work together, the possibilities are wide open when it comes to editing; all you have to do is tell Photos what you want to see, from simple tweaks to complex edits.

See how your images were made for added transparency

Pixel 10 devices will be the first to implement industry-standard C2PA Content Credentials within the native camera app, across photos created by Pixel Camera, with and without AI. To further improve transparency around how images are made, we're adding support for C2PA Content Credentials in Google Photos, in addition to the existing support for IPTC metadata for AI-edited images and SynthID for images edited with Reimagine.

[Image: How C2PA Content Credentials are shown in the Google Photos app.]

Available first for Pixel 10, and rolling out gradually on Android and iOS devices over the coming weeks, you'll now be able to see information right in Google Photos indicating how an image was captured or edited, based on C2PA Content Credentials. Gemini models support many creative and useful features in Google Photos, from search to editing, and we'll keep exploring how to use them to bring you new, helpful ways to use the app.
Most people not deeply involved in the artificial intelligence frenzy may not have noticed, but perceptions of AI's relentless march toward becoming more intelligent than humans, even becoming a threat to humanity, came to a screeching halt Aug. 7.

That was the day when the most widely followed AI company, OpenAI, released GPT-5, an advanced product that the firm had long promised would put competitors to shame and launch a new revolution in this purportedly revolutionary technology.

As it happened, GPT-5 was a bust. It turned out to be less user-friendly and in many ways less capable than its predecessors in OpenAI's arsenal. It made the same sort of risible errors in answering users' prompts, was no better in math (or even worse), and was not at all the advance that OpenAI and its chief executive, Sam Altman, had been talking up.

"The thought was that this growth would be exponential," says Alex Hanna, a technology critic and co-author (with Emily M. Bender of the University of Washington) of the indispensable new book "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want." Instead, Hanna says, "We're hitting a wall."

The consequences go beyond how so many business leaders and ordinary Americans have been led to expect, even fear, the penetration of AI into our lives. Hundreds of billions of dollars have been invested by venture capitalists and major corporations such as Google, Amazon and Microsoft in OpenAI and its multitude of fellow AI labs, even though none of the labs has turned a profit.
Public companies have scurried to announce AI investments or claim AI capabilities for their products in the hope of turbocharging their share prices, much as an earlier generation of businesses promoted themselves as "dot-coms" in the 1990s to look more glittery in investors' eyes. Nvidia, maker of the high-powered chips that drive AI research, plays almost the same role as a stock market leader that Intel Corp., another chipmaker, played in the 1990s: helping to prop up the bull market in equities. If the promise of AI turns out to be as much of a mirage as the dot-coms did, stock investors may face a painful reckoning.

The cheerless rollout of GPT-5 could bring the day of reckoning closer. "AI companies are really buoying the American economy right now, and it's looking very bubble-shaped," Hanna told me.

The rollout was so disappointing that it shined a spotlight on the degree to which the whole AI industry has been dependent on hype. Here's Altman, speaking just before the unveiling of GPT-5, comparing it with its immediate predecessor, GPT-4o: "GPT-4o maybe it was like talking to a college student," he said. "With GPT-5 now it's like talking to an expert, a legitimate PhD-level expert in anything, any area you need, on demand ... whatever your goals are."

Well, not so much. When one user asked it to produce a map of the U.S. with all the states labeled, GPT-5 extruded a fantasyland, including states such as Tonnessee, Mississipo and West Wigina. Another prompted the model for a list of the first 12 presidents, with names and pictures.
It came up with only nine, including presidents Gearge Washington, John Quincy Adama and Thomason Jefferson.

Experienced users of the new version's predecessor models were appalled, not least by OpenAI's decision to shut down access to its older versions and force users to rely on the new one. "GPT5 is horrible," wrote a user on Reddit. "Short replies that are insufficient, more obnoxious ai stylized talking, less 'personality' ... and we don't have the option to just use other models." (OpenAI quickly relented, reopening access to the older versions.)

The tech media was also unimpressed. "A bit of a dud," judged the website Futurism, and Ars Technica termed the rollout "a big mess." I asked OpenAI to comment on the dismal public reaction to GPT-5, but didn't hear back.

None of this means that the hype machine underpinning most public expectations of AI has taken a breather. Rather, it remains in overdrive. A projection of AI's development over the coming years, published by something called the AI Futures Project under the title "AI 2027," states: "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution."

The rest of the document, mapping a course to late 2027 when an AI agent "finally understands its own cognition," is so loopily over the top that I wondered whether it wasn't meant as a parody of excessive AI hype. I asked its creators if that was so, but haven't received a reply.
One problem underscored by GPT-5's underwhelming rollout is that it exploded one of the most cherished principles of the AI world: that "scaling up," endowing the technology with more computing power and more data, would bring the grail of artificial general intelligence, or AGI, ever closer to reality. That's the principle undergirding the AI industry's vast expenditures on data centers and high-performance chips. The demand for more data and more data-crunching capability will require about $3 trillion in capital just by 2028, in the estimation of Morgan Stanley. That would outstrip the capacity of the global credit and derivative securities markets. But if AI won't scale up, most if not all of that money will be wasted.

As Bender and Hanna point out in their book, AI promoters have kept investors and followers enthralled by relying on a vague public understanding of the term "intelligence." AI bots seem intelligent because they've achieved the ability to seem coherent in their use of language. But that's different from cognition. "So we're imagining a mind behind the words," Hanna says, "and that becomes associated with consciousness or intelligence. But the notion of general intelligence is not really well-defined."

Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.
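ELIZA's illusion rested on nothing more than keyword pattern matching and pronoun reflection. A toy sketch of that idea (purely illustrative; Weizenbaum's original program used its own script language and far more elaborate rules) might look like this:

```python
import re

# Toy ELIZA-style responder: pattern matching plus pronoun reflection.
# Illustrative only; this is not Weizenbaum's original implementation.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    # First matching rule wins; the catch-all pattern guarantees a reply.
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."
```

Echoing the user's own words back as a question is enough to make many people "imagine a mind behind the words," which is exactly the effect Weizenbaum warned about.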
"What I had not realized," Weizenbaum wrote in 1976, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Weizenbaum warned that the "reckless anthropomorphization of the computer," that is, treating it as some sort of thinking companion, produced a "simpleminded view of intelligence."

That tendency has been exploited by today's AI promoters. They label the frequent mistakes and fabrications produced by AI bots as "hallucinations," which suggests that the bots have perceptions that may have gone slightly awry. But the bots "don't have perceptions," Bender and Hanna write, "and suggesting that they do is yet more unhelpful anthropomorphization."

The general public may finally be cottoning on to the failed promise of AI more generally. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset. Predictions that AI would yield a burst of increased worker productivity haven't been fulfilled; in many fields productivity has declined, in part because workers have to be deployed to double-check AI outputs, lest mistakes or fabrications find their way into mission-critical applications: legal briefs incorporating nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.

Some economists are dashing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S.
productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp's projections.

The value of Bender and Hanna's book, and the lesson of GPT-5, is the reminder that "artificial intelligence" isn't a scientific term or an engineering term. It's a marketing term. And that's true of all the chatter about AI eventually taking over the world. "Claims around consciousness and sentience are a tactic to sell you on AI," Bender and Hanna write. So, too, is the talk about the billions, or trillions, to be made in AI. As with any technology, the profits will go to a small cadre, while the rest of us pay the price ... unless we gain a much clearer perception of what AI is, and more important, what it isn't.

Spotify has launched its new Mix With Spotify feature, allowing users to seamlessly blend songs within a playlist like a professional DJ. Offering controls like echo, volume automation, EQ, and low and high pass filters, the feature launches in beta on Tuesday (Aug. 19) for Spotify Premium users on the app.

News of Spotify's new feature comes amid growing interest in customizable, DJ-like music products. In March 2025, competitor Apple Music launched its DJ With Apple Music feature, allowing users to integrate songs from the streaming service into platforms like AlphaTheta, Serato, and inMusic's Engine DJ, Denon DJ, Numark and RANE DJ, all commonly used by DJs during sets. Start-ups like Hook and Mash-App have also recently debuted, offering users the ability to mash up, speed up and slow down songs in their library. In February, Bloomberg reported that Spotify was working toward a new superfan service that would include "remixing tools" along with high-fidelity audio, concert tickets and more. It's unclear if Mix With Spotify is what Bloomberg was referencing, but it's certainly a step toward Spotify integrating more playful, customizable features. As Bob Moczydlowsky, Techstars managing director, predicted to Billboard in 2023: "If streaming 1.0 was about making all the music play, Streaming 2.0 should be about being able to play with all the music."

To start using Mix With Spotify, users can choose between its Custom or Auto mix options, giving them as much or as little control over their transitions as they want. Friends can also collaborate on mixes in shared playlists, allowing everyone to edit transitions together.
When a user starts the mixing process, Spotify will automatically show the key and BPM of each track in the playlist so that users can scan and reorder the playlist to ensure the best flow between songs. After the transitions are made, Spotify also offers a feature to customize the playlist cover art with new stickers and labels that are available only for mixed playlists.
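The low and high pass filters in tools like this are standard audio building blocks: a low-pass filter attenuates frequencies above a cutoff, which is what produces the classic "fading the treble out" DJ transition. A minimal one-pole low-pass filter (an illustrative sketch of the general technique, not Spotify's actual implementation) can be written as:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate_hz):
    """Attenuate frequencies above cutoff_hz with a one-pole IIR filter.

    Illustrative sketch of the kind of filter a DJ tool sweeps during a
    transition; not Spotify's implementation.
    """
    # Smoothing coefficient derived from the filter's RC time constant.
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)

    out, prev = [], 0.0
    for x in samples:
        # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out
```

Sweeping `cutoff_hz` downward over the course of a transition removes highs first, then mids, so the outgoing track melts away under the incoming one.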


Most of the internet is out of your reach, but the barrier isn't just algorithms. In another language, the same platforms turn into whole other worlds.

When you go online, it feels like you're accessing all the world's information. But you form social media relationships based on shared language. You search Google in the language you think in. And algorithms built to maximise attention have no reason to recommend what you won't understand. So most of the internet remains out of sight, on the other side of a language filter, and you're missing far more than content.

Most internet activity is concentrated on a small number of large platforms, and from our linguistically siloed perspectives, it's easy to assume that everyone uses them in similar ways. But why should that be true? We expect music, literature and cuisine to vary between cultures, after all, so why not the internet? In a new paper, our team at the University of Massachusetts Amherst's Initiative for Digital Public Infrastructure has uncovered stark differences in how different cultures harness the internet. With more research, it may reshape how we think about the services that dominate the web. We're only just beginning to understand the implications.

The history of the internet offers some examples. Take the Russian social media and blogging platform LiveJournal. When it was popular in the mid-2000s, English-speaking users knew it as a space for young people to share their feelings or geek out about Harry Potter. But if you're a Russian speaker, you probably know LiveJournal very differently: as an important site of public intellectualism and political discourse, playing a rare role in hosting voices from the opposition.
With the biggest technology companies based in the US, a cultural blind spot has emerged where we often assume that the English internet is representative of the rest of the world. Research about YouTube in particular has a significant English-speaking bias: it is typically written in English, published in English-speaking countries and focused on English-language videos.

Ryan McGrady is a senior research fellow at the University of Massachusetts Amherst's Initiative for Digital Public Infrastructure.

The internet's leading platforms are more difficult to study than you might think. Computers can blaze through text, but video is harder to parse at scale. Platforms like YouTube, the world's most popular video service, don't offer tools to create the large representative samples necessary to understand the platform as a whole, or big swaths of it like linguistic communities. As a result, YouTube is often understood through the easily accessible tip of the iceberg: its most popular videos. Between the language bias and this popularity bias, when users, creators, academics, educators, parents, teachers and even policymakers talk about platforms like YouTube, we're typically just talking about the part that's most visible to us, a small, unrepresentative piece of it. (For more, read Thomas Germain's story on the hidden world beneath the shadows of YouTube's algorithm.)

So, how do you study what's under the surface? A couple of years ago, we came up with a way to do what YouTube's tools couldn't: we randomly guessed the URLs of videos, more than 18 trillion times, until we had enough videos to paint a picture of what's really happening on YouTube. What we put together was a first-of-its-kind look at the inner workings of one of the most influential websites on earth. With a large enough representative sample, we could begin making broader comparisons. How do videos uploaded in 2019 compare to videos uploaded in 2021?
Do videos of animals get more comments than videos of sports? What kinds of things can we see when we compare popular videos to those with just a handful of views?

[Image: Radical differences in cultural norms point to a brand-new understanding of what's happening online (Credit: Getty Images)]

Most of all, we wanted to explore linguistic differences: how language and culture shape online participation at a global scale. So, in 2024 we examined language-specific samples of English, Hindi, Russian and Spanish YouTube, working with native speakers to validate our language detection tools. Our goal was to take a high-level view of YouTube in each language to look for broad patterns. We had to acknowledge that YouTube might be just as simple as many people assume: more or less the same across languages. But that's not what we found. Each language varies along multiple dimensions, but one corner of the platform stood out. In short, Hindi YouTube is radically different from its counterparts. Hindi users seem to relate to each other with rhythms and dynamics we didn't see in any other language bloc, and buried in the numbers we could see the story of a major geopolitical conflict.

Let's start with growth. The chart below shows how much of each language was uploaded per year from 2014 to 2023. All four are growing rapidly, but more than half of all Hindi YouTube videos were uploaded in 2023 alone.

[Chart: The growth of YouTube videos in different languages shows a splintering in the paths of cultural evolution (Credit: University of Massachusetts Amherst)]

Then there's length.
Spanish videos are a little longer than the rest, with a median of about two and a half minutes. English isn't far behind at nearly two minutes, and Russian at one minute 38 seconds. But the median Hindi YouTube video is just 29 seconds long.

These details might sound like interesting quirks, but they're actually a reflection of India's internet history. TikTok was incredibly popular in India long before the app exploded in the US and Europe, but that all changed after India banned the app amid border clashes with China in 2020. Overnight, hundreds of millions of users were cut off from their videos, comments, businesses and self-expression. YouTube rushed in to fill the void, making India the first market for YouTube Shorts, a feature the company built to highlight the short-form vertical video format that made TikTok famous. It looks to have been successful. More than half of Hindi YouTube, 58%, is made up of Shorts, compared to just 25-31% for the other languages. In many countries, Shorts is just a TikTok clone, but it has become a much larger ecosystem in India.

The influence of TikTok and Shorts shows up in other ways, too. The next chart focuses on videos 30 seconds and less, showing what portion of each language's videos are one second long, two seconds long, and so on. There is a spike across all languages (though it is particularly extreme in Hindi) at 15 seconds, a default length for TikTok that was then adopted as a default for Shorts.
[Chart: The rise of TikTok seems to have inspired a spike in 15-second videos, but the differences are dramatic when you compare languages (Credit: University of Massachusetts Amherst)]

Terms like "median duration by language" may seem dry, but here they hint at a sea change in the way people use video in many parts of the world.

Next, we found a telling difference in how people described their own videos. YouTube asks people to categorise their videos. Most users don't bother to change the default, People & Blogs. But when we excluded that, the differences between languages grew sharper. You can see this in the last chart below. In Russian, gaming videos dominate; it's the most popular category in English and Spanish, too. But in Hindi, Entertainment and Education are on top. And for all the attention English-language political content gets in the popular discourse, English has the smallest number of videos in the News & Politics category. These category labels are more than metadata: they're a look at how different cultures use the platform for different purposes. The data suggests that people in different linguistic communities aren't just making different videos and engaging with them differently; they may be using YouTube for completely different reasons. What we're seeing is parallel internets shaped by local needs, expectations and norms.

Finally, we looked at popularity metrics: views, likes and comments. Once again, Hindi YouTube was an outlier. It demonstrated extreme inequality: just 0.1% of Hindi videos accounted for 79% of views (the other languages ranged from 54% to 59%). But there's an interesting twist. Those less popular videos were far more likely to have likes.
That suggests something deeper. On Hindi YouTube, even the videos that aren't being seen are being appreciated and acknowledged. Our new research suggests YouTube in India may often be used like a video messaging service to talk to friends and family, with public videos often intended for a private audience.

[Chart: The categories linguistic groups use to tag their videos suggest people use YouTube in meaningfully different ways (Credit: University of Massachusetts Amherst)]

We think some of these differences can be explained by how the internet has been adopted in India, and the country's TikTok inheritance. This may be a different kind of attention economy: less about mass reach, more about small, meaningful engagement. It may be a sign of something more intimate, and perhaps even more human.

We still have a lot of work to do, and a lot of videos to watch, before we can make these claims definitively. But what's already clear is that language doesn't just shape your view of digital life; it can obscure the diverse, culturally specific ways people use these platforms. We're building businesses, journalism and regulation on an artificially limited view of the internet, one often filtered through English, popularity and convenience. It's time we looked deeper.
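The "randomly guessed URLs" method described above, sometimes called random-ID sampling, can be sketched in miniature. The 11-character, 64-symbol alphabet below matches YouTube's public video-ID format, but the existence check here is a stand-in callback, not the researchers' actual pipeline:

```python
import random
import string

# YouTube video IDs are 11 characters drawn from a 64-symbol alphabet.
ID_ALPHABET = string.ascii_letters + string.digits + "-_"

def random_video_id(rng: random.Random) -> str:
    # Uniformly guess one of the ~64^11 possible IDs.
    return "".join(rng.choice(ID_ALPHABET) for _ in range(11))

def sample_existing_videos(n_hits, exists, rng=None, max_tries=10**7):
    """Keep guessing IDs until n_hits real videos are found.

    `exists` is a stand-in for a real lookup (e.g. an API request)
    that returns True when an ID corresponds to an actual video.
    """
    rng = rng or random.Random()
    hits, tries = [], 0
    while len(hits) < n_hits and tries < max_tries:
        vid = random_video_id(rng)
        tries += 1
        if exists(vid):
            hits.append(vid)
    return hits, tries
```

Because real IDs are vanishingly rare in that space, the study needed trillions of guesses, but the payoff is that every hit is an unbiased draw from all public videos rather than from what the recommendation system surfaces.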

Caltech scientists have created a hybrid quantum memory that converts electrical information into sound, allowing quantum states to last 30 times longer than in standard superconducting systems. Their mechanical oscillator, like a microscopic tuning fork, could pave the way for scalable and reliable quantum storage.

Quantum Bits vs. Classical Bits

While traditional computers rely on bits, the basic units of information that can only be 0 or 1, quantum computers operate with qubits. Unlike ordinary bits, qubits can exist as both 0 and 1 at the same time. This unusual behavior, a quantum physics effect called superposition, is what gives quantum computing its extraordinary potential to solve problems that are far beyond the reach of conventional machines.

Most quantum computers today are built using superconducting electronic systems, in which electrons move without resistance at extremely low temperatures. Within these systems, carefully engineered resonators allow electrons to form superconducting qubits. These qubits excel at carrying out fast, complex operations, but they are not well suited for long-term storage. Preserving information in the form of quantum states (mathematical descriptions of specific quantum systems) remains a major challenge. To address this, researchers have been working on creating "quantum memories" that can hold quantum information far longer than standard superconducting qubits.

Extending Quantum Memory with Sound

[Image: A scanning electron microscope image highlighting a single mechanical oscillator, or "tuning fork," from the new work. The false-colored golden lines indicate the location of electrodes that transfer electrical signals between the superconducting qubit and the mechanical oscillator. Credit: Omid Golami]

A team at Caltech has now developed a new hybrid method to extend quantum memory.
By converting electrical signals into sound, they enabled quantum states from superconducting qubits to remain stable for up to 30 times longer than with earlier approaches. The research, led by graduate students Alkim Bozkurt and Omid Golami under the supervision of Mohammad Mirhosseini, assistant professor of electrical engineering and applied physics, was published in Nature Physics.

"Once you have a quantum state, you might not want to do anything with it immediately," Mirhosseini says. "You need to have a way to come back to it when you do want to do a logical operation. For that, you need a quantum memory."

Harnessing Sound for Quantum Storage

Previously, Mirhosseini's group showed that sound, specifically phonons, the individual particles of vibration (in the way that photons are individual particles of light), could provide a convenient method for storing quantum information. The devices they tested in classical experiments seemed ideal for pairing with superconducting qubits because they worked at the same extremely high gigahertz frequencies (humans hear at hertz and kilohertz frequencies, at least a million times slower). They also performed well at the low temperatures needed to preserve quantum states with superconducting qubits, and they had long lifetimes.

Now Mirhosseini and his colleagues have fabricated a superconducting qubit on a chip and connected it to a tiny device called a mechanical oscillator. Essentially a miniature tuning fork, the oscillator consists of flexible plates that are vibrated by sound waves at gigahertz frequencies. When an electric charge is placed on those plates, they can interact with electrical signals carrying quantum information. This allows information to be piped into the device for storage as a "memory" and piped out, or "remembered," later.
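The superposition that makes this storage worth the trouble has a compact standard form: the state a quantum memory must preserve is a weighted combination of both classical values,

```latex
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1,
\]
```

where measuring the qubit yields 0 with probability $\lvert \alpha \rvert^{2}$ and 1 with probability $\lvert \beta \rvert^{2}$. A memory is useful only if it returns the state with the amplitudes $\alpha$ and $\beta$ (including their relative phase) intact, which is why the oscillator's long lifetime matters.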
Storage Times Far Exceed Expectations

The researchers carefully measured how long it took for the oscillator to lose its valuable quantum content once information entered the device. "It turns out that these oscillators have a lifetime about 30 times longer than the best superconducting qubits out there," Mirhosseini says. This method of constructing a quantum memory offers several advantages over previous strategies. Acoustic waves travel much more slowly than electromagnetic waves, enabling much more compact devices. Moreover, mechanical vibrations, unlike electromagnetic waves, do not propagate in free space, which means that energy does not leak out of the system. This allows for extended storage times and mitigates undesirable energy exchange between nearby devices. These advantages point to the possibility that many such tuning forks could be included in a single chip, providing a potentially scalable way of making quantum memories.

The Path Forward

Mirhosseini says this work has demonstrated the minimum amount of interaction between electromagnetic and acoustic waves needed to probe the value of this hybrid system for use as a memory element. "For this platform to be truly useful for quantum computing, you need to be able to put quantum data in the system and take it out much faster. And that means that we have to find ways of increasing the interaction rate by a factor of three to 10 beyond what our current system is capable of," Mirhosseini says. Luckily, his group has ideas about how that can be done.

Reference: "A mechanical quantum memory for microwave photons" by Alkım B. Bozkurt, Omid Golami, Yue Yu, Hao Tian and Mohammad Mirhosseini, 13 August 2025, Nature Physics. DOI: 10.1038/s41567-025-02975-w

Additional authors of the paper are Yue Yu, a former visiting undergraduate student in the Mirhosseini lab, and Hao Tian, an Institute for Quantum Information and Matter postdoctoral scholar research associate in electrical engineering at Caltech.
The work was supported by funding from the Air Force Office of Scientific Research and the National Science Foundation. Bozkurt was supported by an Eddleman Graduate Fellowship.
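The reported 30-fold lifetime gain can be illustrated with a simple exponential-decay model of a stored state. The absolute lifetimes below are hypothetical placeholders (the article quotes only the ratio), so treat this as a sketch of the scaling rather than the paper's numbers:

```python
import math

def surviving_fraction(t_us: float, lifetime_us: float) -> float:
    """Fraction of a stored state surviving after t microseconds,
    assuming simple exponential decay (an illustrative model)."""
    return math.exp(-t_us / lifetime_us)

QUBIT_LIFETIME_US = 100.0                        # hypothetical qubit lifetime
OSCILLATOR_LIFETIME_US = 30 * QUBIT_LIFETIME_US  # the reported ~30x gain

# After 200 microseconds the bare qubit has lost most of its state,
# while the mechanical oscillator still holds the bulk of it.
print(round(surviving_fraction(200, QUBIT_LIFETIME_US), 3))       # 0.135
print(round(surviving_fraction(200, OSCILLATOR_LIFETIME_US), 3))  # 0.936
```

The longer the storage interval, the more the 30x ratio matters, which is why memory lifetime is the headline figure here.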

The discovery could challenge current ideas about how galaxies formed in the early universe.

A series of red, bubble-like spheres over a dark, starry background, with four inset squares enlarging four of the bubbles to show glowing balls of red light: hundreds of unusually bright early-galaxy candidates identified in deep-field images from NASA's James Webb Space Telescope. (Image credit: Bangzheng "Tom" Sun)

Hundreds of unexpectedly energetic objects have been discovered throughout the distant universe, possibly hinting that the cosmos was far more active during its infancy than astronomers once believed. Using deep-field images from NASA's James Webb Space Telescope (JWST), researchers at the University of Missouri identified 300 unusually bright objects in the early universe. While they could be galaxies, astronomers aren't yet certain what they are. Galaxies forming so soon after the Big Bang should be faint, limited by the pace at which they could form stars. Yet these candidates shine far brighter than current models of early galaxy formation predict. "If even a few of these objects turn out to be what we think they are, our discovery could challenge current ideas about how galaxies formed in the early universe, the period when the first stars and galaxies began to take shape," Haojing Yan, co-author of the study, said in a statement from the university.
To discover these objects, the team applied a method called the "dropout" technique, which detects objects that appear in redder wavelengths but vanish in bluer, shorter-wavelength images. This indicates the objects are extremely distant, showing the universe as it was more than 13 billion years ago. To estimate distances, the team analyzed the objects' brightnesses across multiple wavelengths to infer redshift, age and mass. JWST's powerful Near-Infrared Camera and Mid-Infrared Instrument are designed to detect light from the farthest reaches of space, making them ideal for studying the early universe. "As the light from these early galaxies travels through space, it stretches into longer wavelengths, shifting from visible light into infrared," Yan said in the statement. "This stretching, called redshift, helps us determine how far away these galaxies are. The higher the redshift, the closer the galaxy is to the beginning of the universe." Next, the researchers hope to use targeted spectroscopic observations, focusing on the brightest sources. Confirming the newly found objects as genuine early galaxies would refine our current understanding of how quickly the first cosmic structures formed and evolved, and add to the growing list of transformative discoveries made by the JWST since it began observing the cosmos in 2022.
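The stretching Yan describes follows the standard redshift relation: observed wavelength equals emitted wavelength times (1 + z). A short sketch with an illustrative redshift value (the study's actual redshift estimates are not quoted in this article):

```python
def observed_wavelength_nm(emitted_nm: float, z: float) -> float:
    """Cosmological redshift stretches light by a factor of (1 + z)."""
    return emitted_nm * (1 + z)

# Lyman-alpha light emitted at 121.6 nm by a hypothetical z = 10 galaxy
# arrives stretched into the near-infrared, where JWST's cameras observe.
print(round(observed_wavelength_nm(121.6, 10), 1))  # 1337.6
```

The higher z is, the further into the infrared the light lands, which is also why "dropout" candidates appear in redder filters while vanishing in bluer ones.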

Babies stranded in space, zombie football players, and soap operas with cats: we live in an era when YouTube is almost flooded with videos created by artificial intelligence. The sharp rise of such channels means that a significant amount of content is now generated by AI rather than by a person behind the camera. According to analytics, nine of the hundred fastest-growing channels in the past month published purely AI videos. Examples include plots with a baby being crammed into a rocket just before liftoff into space, unusual images of sports stars, and melodramas featuring anthropomorphic cats. The popularity of such material is rising as powerful video-creation tools emerge, including Veo 3 and Grok Imagine. The total subscriber count on these channels runs into the millions: about 1.6 million for the baby-in-space channel and nearly 4 million for the anthropomorphic-cat channel, where the intrigue sometimes runs to extremes. Most of these videos fall into the "AI slop" category: mass-produced, low-quality content, sometimes surreal or eerie, but sometimes with a fairly simple plot, indicating the growing technical sophistication of AI content. "All content uploaded to YouTube falls under our community guidelines, regardless of how it was created," a YouTube representative said. After inquiries to the platform, it was reported that some channels were removed while others had their monetization restricted; exact numbers and names were not specified. A digital culture expert described AI-video generators as the next wave of "pollution" of the Internet, a term first proposed by a writer who voiced concerns about the quality of online content. In his view, AI content could undermine users' trust by substituting quickly generated "garbage" versions for high-quality material. "AI slop fills the Internet with content that is essentially garbage.
This pollution undermines online communities on Pinterest, competes for revenue with artists on Spotify, and fills YouTube with low-quality content," said Dr. Akhil Bhardwaj of the University of Bath. He added that one way of regulating it would be to ban monetization of AI content, which would reduce the incentive to create it; such a step could make the platform less attractive for mass production of AI videos. Ryan Broderick, author of the Garbage Day newsletter, sharply criticizes the impact of AI videos on YouTube, calling the platform a "dumpster" for troubling, cold AI clips and dubious content generally. Alongside YouTube, Instagram is experiencing its own flow of AI content: Reels combining celebrities' heads with animal bodies have drawn millions of views. TikTok is also seeing viral AI videos, in particular comic scenarios with cats competing in unusual contests. Meanwhile, the platforms require labeling of realistic AI videos and use deepfake-detection systems to reduce the risks of misinformation and manipulation.

With Meta, Google, Samsung, and maybe even Apple working on AI-powered glasses, smart spectacles are quickly becoming the hottest gadget in tech. Now even HTC is jumping in on the trend with a new pair of Vive Eagle smart glasses that come with built-in speakers, a 12MP ultrawide camera, and an AI voice assistant. The Vive Eagle glasses are currently only available for purchase in Taiwan, but they seem like a direct rival to Meta's Ray-Ban and Oakley smart glasses. They come with AI-powered image translation, which lets wearers ask the Vive AI voice assistant to translate what they're seeing into 13 different languages. Other features include the ability to record reminders, ask for restaurant recommendations, and take notes (sound familiar?). HTC says its Vive Eagle glasses weigh just 49 grams, around the same as Meta's Ray-Ban smart glasses. The Vive Eagle glasses cost around $520 USD and come equipped with Zeiss sun lenses, with options for a red, brown, gray, or black frame. It's not clear when, or if, HTC plans on bringing these smart glasses to North America or Europe, but Meta might have some competition if it does.

Apple is still hard at work on becoming a relevant player in AI. The latest missive from Mark Gurman at Bloomberg suggests that Apple is shifting its artificial intelligence goals to center on new device segments. Sources reportedly told the publication that Apple has a slate of new smart home products in the works that could help pivot its lagging AI strategy. The center of the new lineup is a tabletop AI companion that has been described as an iPad on a movable robotic arm. It would be able to swivel to face the screen toward a user as they move around their home or office. Sources said the current prototype uses a horizontal display that's about seven inches while the motorized arm can move the screen about six inches away from the base in any direction. Equipped with a long-promised overhaul to the Siri voice assistant, this device could act like an additional person, recalling information, making suggestions and participating in conversations. According to Bloomberg, Apple is targeting a 2027 release for this product. Apple's new lineup is also rumored to include a smart home hub that is a simpler version of the robotic friend with no moving stand. We might be seeing this sooner, with a projected 2026 release for the device. This hub device would be able to control music playback, take notes, browse the web and host videoconferencing. Both the robot companion and the smart home hub are reportedly running a new operating system called Charismatic that's designed to support multiple users. The Siri running on the device will be given a particularly cheery personality, and it may also be getting a visual representation. Bloomberg's sources said there hasn't been a final decision on aesthetics; internal tests have had Siri looking like an animated Finder icon and like a Memoji. Today's scuttlebutt follows on previous reports from Gurman that pointed to Apple's interest in these categories. 
The idea of a smart home hub was apparently floated at the company as far back as 2022, and it's finally being rumored to have a formal debut some time this year. Robots have also been a topic of interest in Cupertino for some time, with claims that Apple was developing a personal robot dating back at least to last spring. While this Bloomberg piece offers more detail about those hypothetical plans, there's always a chance Apple will change direction or scrap a project.

Google has been in a strange place with autofill for some time. While Google has a comprehensive password manager that spans Android and Chrome, I've never found it as seamless as I might want it to be. That seems to be the focus of the latest update heading to your Google phone. According to a report from 9to5Google, Gboard (the default keyboard on Pixel, and my recommendation for all other Android phones too) is going to get a proper shortcut into autofill. Currently, when you go to enter a password you might see a line across the top of the keyboard suggesting some of the passwords you have for that site or app. I've always found this to be hit and miss, with some apps never getting a suggestion and some working perfectly well. In the future, however, Gboard will ask you if you want to "Use Autofill with Google". This will make a shortcut available, which you can add to the top row of the keyboard. You can then tap it when you land on a site or app that you need autofill details for. When you tap the shortcut, you'll have the option to access passwords or payment details, so you can fill in what's needed and get on with your day. However, according to the details, it only shows passwords applicable to the property you're currently trying to access, which won't help if, for example, the login destination is slightly different.
This sometimes happens when a company changes how its login is structured, or if you saved the password when signing into the app and you're now trying to sign into the website, which might identify itself differently. Although this should be a better way to force the issue than relying on Gboard's current and slightly temperamental offering, the lack of wider searching means that if you can't find the credentials you want, you'll have to fall back on the old method of manually searching and using copy-paste. As for payments, I've found that Google Pay generally works very well, but there are some apps and websites where it just doesn't work and I still find myself manually plugging in the details (the desktop offering through Chrome seems to work much better). Having the option to force-fill payment details could make life a lot easier. 9to5Google reports that this new function is available in the Gboard beta, but it hasn't appeared on my device, so there could be some regional factors at play. It sounds promising, although there's no avoiding that autofill on Android is still a bit messy; hopefully, these changes will make the experience better.

Reddit says that it has caught AI companies scraping its data from the Internet Archive's Wayback Machine, so it's going to start blocking the Internet Archive from indexing the vast majority of Reddit. The Wayback Machine will no longer be able to crawl post detail pages, comments, or profiles; instead, it will only be able to index the Reddit.com homepage, which effectively means the Internet Archive will only be able to capture which news headlines and posts were most popular on a given day. "Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," spokesperson Tim Rathschmidt tells The Verge. The Internet Archive's mission is to keep a digital archive of websites and "other cultural artifacts," and the Wayback Machine is a tool for viewing pages as they appeared on certain dates, but Reddit believes not all of its content should be archived that way. "Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content), we're limiting some of their access to Reddit data to protect redditors," Rathschmidt says. The limits will start "ramping up" today, and Reddit says it reached out to the Internet Archive "in advance" to "inform them of the limits before they go into effect," according to Rathschmidt. He says Reddit has also "raised concerns" in the past about people's ability to scrape content from the Internet Archive. Reddit has a recent history of cutting off access to scraper tools as AI companies have begun to use (and abuse) them en masse, but it's willing to provide that data if companies pay. Reddit struck a deal with Google covering both Google Search and AI training data early last year, and a few months later it started blocking major search engines from crawling its data unless they pay.
It also said its infamous API changes from 2023, which forced some third-party apps to shut down and sparked protests, were made because those APIs were being abused to train AI models. Reddit also struck an AI deal with OpenAI, but it sued Anthropic in June, claiming Anthropic was still scraping Reddit even after saying it had stopped. "We have a longstanding relationship with Reddit and continue to have ongoing discussions about this matter," Mark Graham, director of the Wayback Machine, says in a statement to The Verge.
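Crawl limits of the kind Reddit describes are typically communicated to well-behaved crawlers through robots.txt rules. The rules and user-agent string below are hypothetical, purely to show the mechanism; Reddit's actual directives may differ:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules blocking an archive crawler from post and profile
# pages while leaving the homepage crawlable.
rules = [
    "User-agent: ia_archiver",  # assumed crawler user-agent
    "Disallow: /r/",
    "Disallow: /user/",
]

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("ia_archiver", "https://example.com/"))        # True
print(parser.can_fetch("ia_archiver", "https://example.com/r/news"))  # False
```

robots.txt is advisory, of course; the article's point is that even compliant archiving can be abused downstream by third parties scraping the archive itself.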

(CN) – A Southern California man sued Microsoft on Thursday over the software giant's plan to discontinue support for the older version of its widely used operating system, Windows. Though Windows 11 launched nearly four years ago, many of the billion or so Windows users worldwide are clinging to the decade-old Windows 10; the newer version only overtook its predecessor this July. According to StatCounter, nearly 43% of Windows users still run the old version on their desktop computers. The bad news for them is that Microsoft is discontinuing routine support for Windows 10 on Oct. 14, nearly two months from now. Computers running Windows 10 won't stop working that day, but they will no longer receive new features or security updates. The plaintiff, Lawrence Klein, says in his complaint, filed in San Diego Superior Court, that he owns two laptops, both running Windows 10. Both, he says, will become obsolete in October, when Microsoft ends support for Windows 10. Klein says that the end of Windows 10 is part of Microsoft's strategy to force customers to purchase new devices and to "monopolize the generative AI market." Windows 11 comes with Microsoft's suite of generative artificial intelligence software, including the chatbot Copilot. To run optimally, Microsoft's AI needs a piece of hardware called a neural processing unit, which newer tablets, laptops and desktop computers have and older devices do not. "With only three months until support ends for Windows 10, it is likely that many millions of users will not buy new devices or pay for extended support," Klein writes in his complaint. "These users, some of whom are businesses storing sensitive consumer data, will be at a heightened risk of a cyberattack or other data security incident, a reality of which Microsoft is well aware."
"In other words, Microsoftâs long-term business strategy to secure market dominance will have the effect of jeopardizing data security not only of Microsoftâs customers but also of persons who may not use Microsoftâs products at all," he adds. Although the Windows 11 upgrade is free, an estimated 240 million personal computers don't have the right hardware to run the new operating system. And without security updates, they will be increasingly vulnerable to malware and viruses. Those customers will have the option of extended security, which will last until 2028, but at a price: $30 for individuals and $61 per device for businesses, increasing to $244 by the third year. According to one market analyst writing in 2023, Microsoft's shift away from Windows 10 will lead millions of customers to buy new devices and thrown out their old ones, consigning as many as 240 million PCs to the landfill. "If these were all folded laptops, stacked one on top of another, they would make a pile 600km taller than the moon," the analyst wrote. Klein is asking a judge to order Microsoft to continue supporting Windows 10 without additional charge, until the number of devices running the older operating system falls bellow 10% of total Windows users. He says nothing about any money he seeking for himself, though it does ask for attorneys' fees. Microsoft did not respond to an email requesting a comment.

Do you have a meditation app on your smartphone, computer or wearable device? Well, you're not alone. There are now thousands of meditation apps available worldwide, the top 10 of which have been collectively downloaded more than 300 million times. What's more, early work on these digital meditation platforms shows that even relatively brief usage can lead to benefits, from reduced depression, anxiety, and stress to improved insomnia symptoms. "Meditation apps, such as Calm and Headspace, have been enormously popular in the commercial market," said J. David Creswell, a health psychologist at Carnegie Mellon University and lead author of a review paper on meditation apps, published in the journal American Psychologist. "What they're doing now is not only engaging millions of users every day, but they're also creating new scientific opportunities and challenges." One huge boon provided by meditation apps for users is access. "You can imagine a farmer in rural Nebraska not having many available opportunities to go to traditional group-based meditation programs, and now they have an app in their pocket which is available 24/7," said Creswell, who is the William S. Dietrich II Professor in Psychology and Neuroscience. Meditation apps also provide scientists with opportunities to scale up their research. "Historically, I might bring 300 irritable bowel syndrome patients into my lab and study the impacts of meditation on pain management," said Creswell. "But now I'm thinking, how do we harness the capacity of meditation apps and wearable health sensors to study 30,000 irritable bowel syndrome patients across the world?" Combined with products that measure heart rate and sleep patterns, such as Fitbit and the Apple Watch, meditation apps now also have the capacity to incorporate biometrics into meditation practices like never before. 
The biggest takeaway, though, is that meditation apps are fundamentally changing the way these practices are distributed to the general public. Scientific studies of use patterns show that meditation apps account for 96% of overall users in the mental health app marketplace. "Meditation apps dominate the mental health app market," said Creswell. "And this paper is really the first to lay out the new normal and challenge researchers and tech developers to think in new ways about the disruptive nature of these apps and their reach."

Meditation apps challenge users to train their minds, in small initial training doses

As with in-person meditation training, meditation apps start by meeting users where they are. Introductory courses may focus on breathing or mindfulness, but they tend to do so in small doses, the merits of which are still being debated. According to the data, just 10 to 21 minutes of meditation app exercises done three times a week is enough to see measurable results. "Of course, that looks really different from the daily meditation practice you might get within an in-person group-based meditation program, which might be 30 to 45 minutes a day," said Creswell. The à la carte nature of meditation through a smartphone app may appeal to those pressed for time or without the budget for in-person coaching sessions. Users may also find it comforting to know that they have access to guided meditation on demand, rather than at scheduled places, days, and times. "Maybe you're waiting in line at Starbucks, and you've got three minutes to do a brief check-in mindfulness training practice," said Creswell. Finally, as meditation apps continue to evolve, Creswell believes integration of AI, such as meditation-guiding chatbots, will only become more common, offering the option of even more personalization.
This could mark an important development for meditation adoption at large, as offerings go from one-size-fits-all group classes to training sessions tailored to the individual. "People use meditation for different things, and there's a big difference between someone looking to optimize their free-throw shooting performance and someone trying to alleviate chronic pain," said Creswell, who has trained Olympic athletes in the past.

The elephant in the room

Of course, with new technology comes new challenges, and for meditation apps, continued engagement remains a huge problem. "The engagement problem is not specific to meditation apps," said Creswell. "But the numbers are really sobering. Ninety-five percent of participants who download a meditation app aren't using it after 30 days." If the meditation app industry is going to succeed, it will need to find ways to keep its users engaged, as apps like Duolingo have. But overall, Creswell said the market demand is clearly there. "People are suffering right now. There are just unbelievably high levels of stress and loneliness in the world, and these tools have tremendous potential to help," he said. "I don't think there is ever going to be a complete replacement for a good, in-person meditation group or teacher," said Creswell. "But I think meditation apps are a great first step for anyone who wants to dip their toes in and start training up their mindfulness skills. The initial studies show that these meditation apps help with symptom relief and even reduce stress biomarkers."
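The dose gap Creswell describes is easy to make concrete in weekly minutes, using only the figures quoted above (app sessions three times a week versus daily in-person practice):

```python
# Weekly practice minutes implied by the article's figures.
app_min, app_max = 10 * 3, 21 * 3      # 10-21 minute sessions, 3x per week
group_min, group_max = 30 * 7, 45 * 7  # 30-45 minutes a day, in person

print((app_min, app_max))      # (30, 63)
print((group_min, group_max))  # (210, 315)
```

Even the heaviest app schedule here comes to less than a third of the lightest in-person regimen, which is why the merits of these small doses are still debated.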

All app developers will likely have a brief window to prepare their apps for the new Siri before it launches to the public through Apple's developer beta program. For now, Apple includes a note in its developer documentation for App Intents, the framework that has powered Shortcuts, Spotlight, widgets, and more for years. Apple's note on Siri integration reads: "Siri's personal context understanding, onscreen awareness, and in-app actions are in development and will be available with a future software update." Meanwhile, we already have an idea of some of the apps that may have a head start on working with the new Siri. According to Mark Gurman, Apple is working with the companies behind eight popular iOS apps to develop App Intents for the new Siri: Uber, AllTrails, Threads, Temu, Amazon, YouTube, Facebook, and WhatsApp. Additionally, Gurman reports that Apple has even tested the upgraded Siri with "a few games," which may come as a surprise. Apple is developing the system with its own apps, too, of course. This detail comes in an update on the state of Apple's delayed Siri upgrade, which Gurman continues to say could arrive as early as spring next year. Apple originally announced the new system at WWDC in 2024 and planned to release it by last spring. The biggest development challenge seems to have been working toward a hybrid approach that uses new large language models alongside the legacy parts of the old Siri. Amazon looked like it might beat Apple to the punch after facing a similar strategy reset, but early reviews of the new Alexa+ system on select Echo devices find it less reliable at performing the simple tasks the old system handles well.

Porsche has taken the wraps off two updated or new cars for the 2026 racing season: an optimized version of its 911 GT3 R for customers, and an all-new 911 Cup car (based on the type 992.2). The 911 GT3 R formally premiered today, while the 911 Cup car was revealed in testing livery last month. Releases for both are below.

Porsche customer teams can rely on optimized 911 GT3 R from 2026

Refined aerodynamics, improved details, easier handling and driveability
Development driven by predecessor's 500+ real-world race starts
Track debut under competitive race conditions successfully completed
Upgrade package available for existing Porsche 911 GT3 R vehicles

Porsche is set to launch a further refined version of the successful 911 GT3 R for the 2026 season. The new GT3 race car incorporates a range of detailed optimisations, including reworked aerodynamics. Porsche Motorsport will offer the next-generation 911 GT3 R to customer teams at a price of 573,000 euros, excluding country-specific VAT and optional extras.

Stuttgart. Since its debut at the start of 2023, the current Porsche 911 GT3 R has built an impressive track record, with numerous victories and titles from more than 500 race starts worldwide. In the past season alone, customer teams secured the unofficial GT3 world championship for manufacturers in the Intercontinental GT Challenge and claimed first place in all three GTD PRO (Grand Touring Daytona Pro) classifications of the IMSA WeatherTech SportsCar Championship. In the Nürburgring Langstrecken-Serie (NLS), the current GT3 R, of which Porsche Motorsport has delivered 106 units to customer teams to date, took the chequered flag first in six of the eight races on the legendary Nordschleife. The car also won the inaugural Endurance Trophy for LMGT3 teams and drivers in the FIA World Endurance Championship (WEC), including a class victory at the 24 Hours of Le Mans, triumphing over eight rival sports car manufacturers.
This season, the up to 416 kW (565 PS) Porsche 911 GT3 R remained unbeaten at the French classic again. In the DTM, 2023 champion Thomas Preining reignited his 2025 title challenge with a recent victory at the Norisring. The newly evolved race car, refined by Porsche Motorsport in numerous key areas, now follows in the footsteps of its highly successful predecessor. The primary focus of the evolution was on optimising suspension and aerodynamics, with the goal of achieving even more balanced handling and improved drivability, particularly for non-professional drivers, even under variable conditions. "Our focus for this update was on optimisation. Small changes can make a big difference when built on a solid, proven foundation," says Sebastian Golz, Project Manager Porsche 911 GT3 R. "Driver feedback after the first race outing during the development phase in April confirmed our direction. We're confident this evolution will allow our customer teams to continue competing successfully across the globe." Michael Dreiser, Director Sales Porsche Motorsport, adds: "The Porsche 911 GT3 R's record of more than 420 podium finishes says it all. It crowns our range of GT customer racing cars. Together with the 718 GT4 RS Clubsport, which represents the ideal entry point into international GT racing, this new evolution offers a strong overall package for the 2026 season. The option to upgrade existing 911 GT3 R models via an update kit also represents an attractive solution for our customer teams."

Chassis and aerodynamics optimisations for improved braking stability

The most striking visual feature of the new 911 GT3 R is the addition of ventiducts on the upper side of the front wheel arches. These so-called "louvres" contribute significantly to improved aerodynamics.
Coupled with the optimised kinematics of the double wishbone front suspension, which provides an anti-dive effect by enhancing force resistance, the louvres help to counteract front-end compression during deceleration, thereby maintaining aerodynamic balance. This reduces the tendency of the car to tilt forward during braking, also known as pitch sensitivity. As a result, the new 911 GT3 R offers more precise and predictable braking behaviour, improving overall control. At the rear, the swan-neck rear wing is equipped with a four-millimetre Gurney flap. This generates additional aerodynamic downforce and broadens the scope for aerodynamic balance adjustments. The underbody is fully enclosed and reinforced at the rear. Simultaneously, modified kinematics of the multi-link rear axle increase the anti-squat effect, reducing rear-end compression under hard acceleration. This improves dynamic load distribution between the axles. In combination with an adapted fifth-generation racing ABS from Bosch, these enhancements result in more balanced handling. Further detailed improvements are based on the extensive feedback from Porsche Motorsport's customer teams across a wide range of racing events worldwide. For instance, the electrohydraulic power steering system now features additional fluid cooling, optimising its thermal performance and ensuring consistent steering forces, even on demanding circuits such as the Nürburgring Nordschleife. New ceramic wheel bearings enhance robustness and durability, while modified centring pins simplify the installation of drive shafts. These are now cooled directly via their own air supply through NACA ducts in the side skirts, independent of the brake cooling. This improves their stability on high-speed tracks such as Monza or Le Castellet, where low ride height is critical. At the same time, the rear brake cooling system can be adjusted more precisely, an important feature for circuits like Daytona.
A modified driver air vent ensures consistent air circulation within the cockpit, even during long-distance races. The RLU USB stick now offers practical advantages: this Remote Logger Unit stores the driving data of the new 911 GT3 R directly on a USB stick, which can be quickly swapped out, even during a short pit stop. This eliminates the time-consuming need to connect a laptop via cable.

Numerous option packages are now included as standard equipment

Porsche Motorsport now fits several packages ex-works that were previously optional extras for the 911 GT3 R: the sensor package, endurance package, pit lane link package, and camera package. These kits include four laser ride-height sensors, two master brake cylinder potentiometers, a track temperature sensor, a rear-view camera, and mountings for the water bottle system. A refuelling detection sensor registers when the fuel nozzle is inserted. Together with an additional refuelling LED, this plays a key role in series such as IMSA and the FIA World Endurance Championship, as well as in the 24 Hours of Spa-Francorchamps, to ensure compliance with minimum refuelling times and energy quantities. Customer teams can still choose from a range of special equipment options that are tailored to the demands of specific GT3 series. For the FIA LMGT3 class and IMSA, for example, these include special driveshafts and, in the NLS, a modified pre-silencer similar to the LMGT3, as well as wing supports with a modified adjustment range.

Successful first test outing under competitive conditions

The 4.2-litre flat-six engine, which delivers up to 416 kW (565 PS) depending on its Balance of Performance (BoP) classification, and the drivetrain of the current 911 GT3 R remain largely unchanged. For existing vehicles based on the 911 generation 992, Porsche Motorsport plans to offer around 60 update kits at a unit price starting at 41,500 Euros plus country-specific sales tax.
The new-generation modifications can then be installed on existing vehicles. Development of the new 911 GT3 R began in August 2024. Porsche Motorsport conducted testing both at its in-house facility in Weissach and on permanent race circuits such as Sebring, Paul Ricard, Spa-Francorchamps, and the Nürburgring Nordschleife. A key test took place in mid-April under competitive conditions, when a test vehicle entered by Herberth Motorsport competed in the Michelin 12H Spa-Francorchamps on the Belgian Grand Prix circuit. Former Porsche Junior and reigning IMSA GTD PRO champion Laurin Heinrich, along with his German compatriots Ralf Bohn and Alfred Renauer, secured second place overall in the two-part race.

The new 911 Cup: stronger performance for the successful one-make model

- Cup car based on the 992.2 generation 911 to take over from 2026
- More power, improved lap times, simplified handling
- New 911 Cup will also roll off the production line at the main Zuffenhausen plant

Porsche unveils the new 911 Cup, the latest evolution of its one-make cup racing car for the Porsche Mobil 1 Supercup, the various Carrera Cup championships, and other Porsche-sanctioned series. This new model will line up on the grid from the start of the 2026 season. Based on the 992.2 generation of the 911, the successor to the acclaimed predecessor features numerous detailed refinements. Development efforts focused on enhancing performance, maintaining reasonable operating costs, and simplifying handling for both drivers and teams. The naturally aspirated 4.0-litre six-cylinder boxer engine now delivers an increased output of 382 kW (520 PS), a ten PS increase. Porsche is offering the new racing car at a price of 269,000 euros, excluding country-specific VAT.

Stuttgart. The new racing car based on the 911 for Porsche's one-make cups and series is now officially called the 911 Cup.
With this, the Stuttgart-based sports car manufacturer is streamlining and standardising the naming of its customer racing vehicles. From now on, only cars intended for open-brand racing series or specific segments will carry the "GT" suffix combined with a number in their designation, as is the case with the new evolution of the 911 GT3 R, which also makes its debut today. The 911 Cup is largely derived from the road-approved 911 GT models and is produced alongside the series-production cars at Porsche's main plant in Zuffenhausen. This has proven highly successful: since production began at the end of 2020, Porsche Motorsport has built 1,130 units of the current 911 GT3 Cup. To date, a total of 5,381 Porsche 911 vehicles have been produced as one-make racing cars. "Like its successful predecessors, the new 911 Cup pushes boundaries. It combines series components from our GT sports cars with pure racing technology to create a coherent and performance-based overall concept," emphasises Thomas Laudenbach, Vice President Porsche Motorsport. "Driving the 911 Cup has always been regarded as a challenge. And we want to keep it that way because it also serves as the training platform for our Porsche Juniors. The success of this concept is evident in its countless race and championship victories." Michael Dreiser, Director Sales Porsche Motorsport: "The Cup race car based on the 911 is one of the best-selling racing cars in the world. Alongside the 718 GT4 RS Clubsport, it forms the demanding basis of our motorsport pyramid and is used globally in our one-make cup series. But its success extends far beyond that: the secret lies in its versatility.
Cup cars regularly achieve strong overall results in endurance races, open GT competitions, and a myriad of other racing events."

Bodywork: adapted design, improved aerodynamics

The 911 Cup already sets itself apart visually from its predecessor, most notably with a front end that now reflects the design of the 992.2-generation 911 GT3. The front spoiler lip is now made up of three separate parts, allowing only the damaged sections to be replaced after contact, which also helps lower packaging and shipping costs for spare parts. The removal of the daytime running lights serves a similar purpose: in the event of a collision, they can no longer damage the radiators behind them, nor do they require replacement afterwards. The fenders feature integrated louvre vents, which aid airflow through the wheel arches and enhance aerodynamic downforce on the front axle. The same effect is achieved by the aerodynamically optimised underbody, which, like in the standard model, positively influences the car's driving dynamics. So-called turning vanes, located behind the front wheel arches, further improve airflow along the front end. The interaction of these elements results in a more responsive front axle, particularly at high speeds, allowing the driver to position the race car with greater precision ahead of each corner. The more aggressively styled rear end of the new 911 Cup has undergone a complete redesign. The swan-neck rear wing features a revised connection to the wing supports, making position adjustment and handling easier. The engine compartment cover has also been thoroughly reworked. Like almost all body components, including the doors, it is made from recycled carbon fibre fleece combined with bio-based epoxy resin. For example, off-cuts from other manufacturing processes are repurposed to produce the fleece, a measure that contributes, among other benefits, to stabilising spare parts pricing.
Engine: racing engine even closer to the 911 GT3 series power unit

The water-cooled, high-revving six-cylinder engine continues to rely on natural aspiration. The visceral-sounding 4.0-litre boxer engine remains based on the unit used in the Porsche 911 GT3. In its latest racing version, now delivering 382 kW (520 PS), it incorporates additional components from the series production engine, including flow-optimised individual throttle valves and camshafts with extended valve opening times. This design eliminates the need for a centrally positioned throttle valve, which in turn allows for the installation of an air restrictor, a requirement for competing in other motor racing championships. Despite the ten PS increase, the engine's service life remains unchanged: it only requires an overhaul after 100 hours of track time. To comply with varying noise limits across racing series, circuits, and local regulations, three different exhaust systems are available. A more robust four-disc sintered metal racing clutch now handles power transmission to the sequential six-speed dog gearbox. This upgrade allows the engine speed, previously limited to 6,500 rpm during a standing start, to be increased, further enhancing the acoustic theatrics at the beginning of a race. An automatic engine restart function has also been introduced. This activates as soon as the driver depresses the clutch pedal after an accidental stall. Additionally, a new stroboscope function on the brake lights now alerts following drivers, particularly during the start phase. This replaces the previous use of the hazard warning lights for this safety application.

Brakes: improved performance, extended lifespan

The braking system has undergone a comprehensive upgrade. The front axle now features 380-millimetre discs, with their cross-section increased from 32 to 35 millimetres. This change allows for larger cooling channels for self-ventilation, improving heat dissipation.
The background to this development: By relocating the central water cooler to the rear of the boot, cooling air can now be directed to the brakes through the central front section. Additionally, the outer diameter of the brake disc hat has been reduced, increasing the friction surface between the disc and brake pad. This results in more efficient deceleration thanks to wider brake pads, improved durability during long-distance races, and a significantly extended service life for the individual components. The Bosch M5 racing ABS will now be fitted ex-works in all 911 Cup cars. It features enhanced data processing capabilities to interpret input from the new acceleration sensor, which offers additional signal detection. The advanced software can also alert the driver in the event of a leak in either of the two brake circuits. Additionally, the brake fluid reservoir has been enlarged, making it suitable for long-distance racing. Adjusted steering stops enable the electronically assisted power steering to achieve a tighter turning radius, making manoeuvring through narrow city streets easier. The increased steering lock also allows drivers to counteract oversteer in the 911 more effectively.

Cockpit: simplified operation during racing and in the pits

On the subject of steering, the redesigned, now higher-quality multifunction steering wheel combines a more attractive design with practical advantages. For example, central rotary controls are used to adjust ABS intervention and traction control. The newly designed colour-illuminated control buttons improve the readability of the respective labels. The central control panel next to the seat remains easily accessible and operable for the driver, even during a race. It now features eight physical switches instead of ten.
The button at the bottom right opens an additional menu page on the display, enabling a wide range of detailed settings to be adjusted from inside the car, including pit lane speed, exhaust mapping, and steering angle reset. This removes the need to connect a laptop and simplifies operations for the teams. Additional foam padding on the inside of the door crossbar offers extra protection for the driver's arms, legs, and feet. Matthias Scholz, Director GT Racing Cars, explains: "The new 911 Cup stands out thanks to the extensive attention to detail that has gone into its development. It is stronger, faster, yet also more practical. Component service life remains unchanged, in some cases even extended, despite the increase in performance. Where appropriate, materials have been replaced with components containing a high proportion of recycled materials. Cockpit operations have been optimised, and a range of additional electronic features allows for broader application across different racing formats."

Electronics: practical additional functions

The upgraded electronics in the new 911 Cup also contribute to improved drivability. The TPMS (Tyre Pressure Monitoring System) now displays tyre air temperatures on the central dashboard display. A significantly more powerful GPS antenna replaces the previous infrared system, taking over lap time and position tracking. Proven features from its big brother, the 911 GT3 R, have also been integrated, including lap time measurement for pit lane passages and the "pre-kill" function, which automatically switches off the engine once the car comes to a standstill during pit stops. Additionally, a new electronic monitoring system for the fire extinguisher release unit now checks the charge level of the self-contained nine-volt battery. In developing the 911 Cup, Porsche Motorsport once again partnered with Michelin to create a new generation of tyres for the one-make cup car.
Real-world testing was conducted at Italy's Grand Prix circuit in Monza, the Lausitzring in Brandenburg, and Porsche's in-house track at the Weissach Development Centre. Behind the wheel were three former Porsche Juniors: Bastian Buus, Laurin Heinrich, and Klaus Bachler, joined by seasoned racing driver Marco Seefried.

August 8, 2025 - SoundHound AI, Inc. (NASDAQ: SOUN), a leader in voice artificial intelligence, saw its stock price soar by over 20% in premarket trading on Friday following the release of its second-quarter earnings, which analysts are calling the company's "best quarter ever." The rally comes on the heels of a stellar performance that exceeded Wall Street expectations and showcased significant growth in the adoption of its AI-driven solutions.

Record Revenue and Strong Growth

SoundHound AI reported a remarkable Q2 revenue of $42.7 million, a 217% year-over-year increase, far surpassing the consensus analyst estimate of $32.9 million. The company also posted an adjusted loss of $0.03 per share, beating expectations of a $0.05 per-share loss. This robust performance was driven by strong demand for SoundHound's conversational AI technologies across multiple sectors, including automotive, restaurants, and enterprise customer service. The company highlighted its expansion into new verticals, such as healthcare and hospitality, as a key factor in its growth.

Upbeat Guidance Fuels Investor Optimism

Adding to the bullish sentiment, SoundHound raised its full-year revenue forecast to a range of $160 million to $178 million, up from its previous guidance of $157 million to $177 million. This revised outlook suggests annual sales growth of approximately 99.5% at the midpoint, signaling confidence in continued momentum. The company's CFO, Nitesh Sharan, emphasized during the earnings call that recent investments are yielding high returns, with a clear path to profitability in the near term.

Strategic Wins and Industry Validation

SoundHound's success in Q2 was underpinned by significant client wins and partnerships. The company's AI agents have been adopted by major brands, including a nationwide pizza chain and established names like Chipotle Mexican Grill and Casey's General Stores.
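The guidance arithmetic quoted above can be checked directly. A quick sketch using only figures stated in the article; the implied prior-year revenue is derived here for illustration, not reported in the text:

```python
# Sanity check of the revised full-year guidance figures from the article.
low, high = 160.0, 178.0               # revised FY revenue guidance, in $ millions
midpoint = (low + high) / 2            # midpoint of the guidance range
growth_at_midpoint = 0.995             # "approximately 99.5%" annual growth

# Implied prior-year revenue if the midpoint represents ~99.5% growth.
implied_prior_year = midpoint / (1 + growth_at_midpoint)

print(midpoint)                        # 169.0
print(round(implied_prior_year, 1))    # ~84.7 ($M, derived estimate)
```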
Additionally, SoundHound's technology has been instrumental in reducing customer service workloads, as evidenced by its work with insurer Apivia Courtage, where AI agents handled over 100,000 customer inquiries, cutting inbound requests by 20%. These developments highlight the growing demand for voice AI solutions in automating and enhancing customer experiences.

Market Reaction and Analyst Sentiment

The market responded enthusiastically, with SoundHound's stock climbing as much as 26.4% during Friday's trading session, closing up 20% by mid-afternoon. Despite the surge, some analysts remain cautious, noting that SoundHound was not included in The Motley Fool's list of the top 10 stocks to buy, indicating mixed sentiment about its long-term valuation. However, the company's debt-free balance sheet and accelerating revenue growth have bolstered optimism among investors, particularly as short sellers, who once held a 33% interest in the stock, have begun covering their positions.

Looking Ahead

SoundHound's focus on voice-centric AI and its recent acquisition of Amelia, a platform for creating industry-specific AI applications, positions it for further expansion. Analysts from HC Wainwright and Wedbush have adjusted price targets but still see a potential 50% to 65% upside, though they caution that the stock may face resistance near $11.90. With no debt and a cash position of $230 million, SoundHound is well-capitalized to sustain its growth trajectory, potentially reaching profitability within the next two to three years. As SoundHound continues to capitalize on the AI boom, its record-breaking Q2 performance underscores its potential to become a major player in the conversational AI space. Investors will be watching closely for the company's next earnings report in November to see if this momentum carries forward.

Two of this century's breakthrough technologies are on a collision course. Investors in Bitcoin should pay attention. Experts say that ultrapowerful quantum computers could eventually crack the security codes of blockchain, the underlying technology for Bitcoin. That would be a hacker's dream. And it could deal a severe blow to investors' trust in the $2 trillion-plus market for the leading cryptocurrency. Roughly a quarter of all Bitcoins are now protected with algorithms that could be cracked by quantum computers in five or 10 years, Gartner analyst Avivah Litan tells Barron's. Those are mostly older Bitcoins housed in digital vaults, or wallets, that date back as far as 15 years. As quantum computing keeps advancing, the damage could spread to newer wallets, and then to the market's broader structure. The computers "might eventually become so fast that they will undermine the Bitcoin transaction process," experts at Deloitte have written. Conceivably, hackers could start rewriting the history of trades. The crypto industry knows about these risks and is quietly preparing to defend itself. "There are very strong incentives to protect the value in Bitcoin's network and drive the development of quantum-resistant technology," Litan says. Ultimately, the industry's best weapon for the fight could prove to be quantum computing itself. Some firms are already working on that. The big, unanswerable question is how quickly quantum develops. That is, how soon might the security features of blockchain meet their match? Will the industry finish its preparations in time? Of the two technologies, blockchain is the easier to understand. It is essentially a digital record-keeping system consisting of "blocks," each containing details on validated transactions. Each time an entry is created and authenticated, a block is added. It is the beating heart of the Bitcoin market.
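The block-and-record structure described above can be sketched in a few lines of Python. This is a toy illustration of hash-linked blocks using the standard library, not Bitcoin's actual data format:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Digital fingerprint of a block: hash its canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

chain = []
add_block(chain, ["alice->bob: 1 BTC"])
add_block(chain, ["bob->carol: 0.5 BTC"])

# Each block commits to its predecessor, so the links verify cleanly...
assert chain[1]["prev_hash"] == block_hash(chain[0])

# ...and tampering with an earlier block breaks the link to every later one.
chain[0]["transactions"][0] = "alice->mallory: 1 BTC"
assert chain[1]["prev_hash"] != block_hash(chain[0])
```

This linkage is why "rewriting the history of trades" is so hard classically: changing one old record invalidates every hash after it.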
The concept of a blockchain has existed since the 1990s, when computer scientists Stuart Haber and W. Scott Stornetta proposed the first system to timestamp data using cryptography. In October 2008, a mysterious, faceless developer (or developers) going by the name Satoshi Nakamoto published a white paper detailing a "peer-to-peer electronic cash system" that would become the prototype for the blockchain network. As it happens, Nakamoto's holdings, which the most bombastic estimates place at 1.1 million Bitcoins, or some $128 billion, could be vulnerable to the first wave of any quantum-based attacks. That's because the assets are believed to have been tucked away since 2010 in the kind of older, digital vaults considered to be most at risk. Quantum computing, under development since the 1980s, is derived from quantum mechanics. And what is that, exactly? The pioneering physicist Richard Feynman may have put it best: "I think I can safely say that nobody understands quantum mechanics." The remark, part of a lecture at Cornell University in 1964, drew chuckles from the audience, but the sentiment still rings true today, even as hype about the technology explodes on Wall Street. A collection of small, volatile quantum-computing stocks have become some of investors' favorite speculative playthings. In general, quantum computing aims to take traditional computing to an entirely new level. It seeks to solve big, complex statistical problems by examining large numbers of variables at the same time. A typical quantum system consists of a bulky, refrigerator-like shell encasing a nest of hardware. At its core sits a quantum processor, usually no bigger than a thumbnail. Information is encoded by quantum bits, or "qubits," which are created by manipulating and measuring subatomic particles like electrons, photons, and ions.
Because qubits allow these particles to exist in multiple states at once, quantum computers can perform calculations outside the reach of traditional machines. Theoretically, they can be used for everything from unsnarling a city's traffic jams to discovering new treatments for cancer. And for cracking cryptographic algorithms. "That is one of the cases where the features of quantum mechanics are used to do things that are very hard or too time-intensive, and basically impossible, otherwise," says Thomas Ehmer, co-founder of the Quantum Interest Group at Merck KGaA. To attackers, it's the "holy grail," Ehmer says. Quantum computers, he adds, could work in a "hyper-efficient" way to peel away the layers of numbers that form the core of blockchain encryption. For most cryptocurrencies, that core is based on pairs of keys: a public key and a private key, which are mathematically linked. The public key is used for encryption, or scrambling data to safeguard it from prying eyes, while the private key is used for decryption, or converting it back into a readable format. Think of a public key like an email address or a username. Anyone can view and share it, and anyone can use it to encrypt data. However, only the holders of the corresponding private key can decrypt the data. The security of encryption relies on the difficulty of factoring large numbers, or breaking a number into smaller prime numbers that, when multiplied together, equal the larger number. Current technology cannot do that for the very large numbers used in practice, but a fully realized quantum computer theoretically could, and in surprisingly short time. "It's like having a superpower that lets you quickly pick a lock that would take a normal person millions of years to even attempt," Ehmer says. There have already been attempts to crack the code.
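The key-pair mechanics described above can be illustrated with textbook RSA and deliberately tiny primes. Note this is purely an illustration: Bitcoin itself uses elliptic-curve signatures rather than RSA, but both rest on math problems (factoring, discrete logarithms) that a large quantum computer running Shor's algorithm could solve efficiently:

```python
# Textbook RSA with toy primes -- illustrative only, never secure in practice.
p, q = 61, 53                  # the private primes
n = p * q                      # public modulus (3233); security = hardness of factoring n
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # private exponent; derivable only if you can factor n

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
assert recovered == message

# A quantum computer running Shor's algorithm could factor n quickly,
# recover d from the public key alone, and unlock everything -- the
# "pick a lock in seconds" scenario Ehmer describes.
```

Real keys use moduli hundreds of digits long, which is exactly what makes classical factoring attacks take "millions of years."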
In a 2024 paper, Chinese scientists claimed they had used a system from D-Wave Quantum to break RSA encryptions, which are used in online banking transactions and VPN connections. The experiment, however, was conducted on a relatively small scale and wasn't considered a major advance. Still, fear is clearly seeping into the crypto industry's consciousness. BlackRock, the world's largest asset manager, warned of the advent of more powerful computers when it prepared to launch a Bitcoin exchange-traded fund in 2024. The firm noted in a filing with the Securities and Exchange Commission that "quantum computing could result in the cryptography underlying the Bitcoin network becoming ineffective, which, if realized, could compromise the security of the Bitcoin network" and lead to losses for shareholders. Similar language appears in BlackRock's filings as far back as 2023. Just how real is the risk? Some three-quarters of Bitcoins have an additional layer of cryptography that keeps them out of imminent danger. However, the threat is nothing to scoff at, according to Michael Osborne, chief technology officer at IBM Quantum Safe. "Assets can be stolen from existing wallets if fairly simple actions are not taken to protect them," Osborne says. The most immediate fix may be to move funds from old or reused addresses to new wallets that don't have their public keys exposed, in anticipation of the day quantum computers gain the ability to determine a private key using a public key. As quantum develops in the coming years, protective measures may well become harder to devise. Hackers could gain the tools to disrupt Bitcoin mining and the basic operations of the market, such as rewriting transaction history. Gartner's Litan says that some experts place the odds of this happening at 50% by 2037.
"There is strong consensus in the Bitcoin community that preparation now is essential to prevent future catastrophe, though some view the threat as overhyped," Litan says. No matter the difference of opinions, developers aren't sitting idly by. Rather, they have kicked off a digital arms race even before the true conflict has begun. "It's pretty widely known that the bad actors will try to use quantum computers to break classical encryption," Quantinuum CEO Rajeeb Hazra tells Barron's. "But that same tool can also be used to create better algorithms." The child of Honeywell Quantum Solutions and a United Kingdom-based start-up, Quantinuum was created through a merger in 2021. The firm received an initial investment of $300 million from Honeywell International, and released its first product, a random-number generator with cybersecurity applications, in December of that year. In March 2025, Quantinuum teamed up with researchers at JPMorgan Chase for an experiment demonstrating how a quantum computer could best a classical machine at a random-number-generation problem. As random numbers are used in everything from computer simulations to cryptography, the study had important real-world implications. "Forever the race will remain, right?" Hazra says with a chuckle. "We see it in the classical world, and we'll see it taken to the next level with quantum." Researchers at D-Wave Quantum have approached the challenge by developing a blockchain architecture that runs on quantum computers. "The distributed nature of the Bitcoin network is based on a bunch of miners collaborating and each doing a hard cryptographic puzzle, which requires a lot of classical computational power," explains Trevor Lanting, D-Wave's chief development officer.
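The "hard cryptographic puzzle" Lanting refers to can be sketched as a classical proof-of-work loop: brute-force a nonce until the block's hash meets a difficulty target. This is a toy version; Bitcoin's real difficulty target and block format differ:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty`
    hex zeros. This brute-force loop is the work miners race to do."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("block #1: alice->bob")

# Finding the nonce is expensive; verifying it takes a single hash.
check = hashlib.sha256(f"block #1: alice->bob{nonce}".encode()).hexdigest()
assert check.startswith("0000")
```

The asymmetry (expensive to find, cheap to verify) is what a "quantum proof of work" scheme like D-Wave's would need to preserve while changing how the hard step is performed.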
Blockchains rely on hashing, a mathematical function that acts like a digital fingerprint by converting an input into a string of characters. Hashing is used to encrypt transactions, and "proof of work" algorithms validate those transactions. D-Wave aims to replace this process with a quantum proof of work, which the company describes as a new way to securely and efficiently create hashes. In a preprint submitted to research-sharing platform arXiv in March, scientists showed how they had tested a prototype blockchain on four D-Wave processors scattered across North America, "demonstrating stable operation across hundreds of thousands of quantum hashing operations." The race is far from over. Insights from IBM suggest cryptographically relevant quantum computers could arrive in a decade, while some organizations anticipate it may take up to 12 years to become quantum-resistant. There's an expression in the crypto community that might be appropriate here: "HODL," or hold on for dear life.
Meta distilled its full-body Codec Avatars tech to render three at once on Quest 3 standalone, with some notable tradeoffs.

For around a decade now, Meta has been researching and developing the technology it calls Codec Avatars, photorealistic digital representations of humans driven in real-time by the face and eye tracking of VR headsets. The highest-quality prototype achieves the remarkable feat of crossing the uncanny valley, in our experience. The goal of Codec Avatars is to deliver social presence, the subconscious feeling that you're truly with another person, despite them not physically being there. No flatscreen technology can do this. Video calls don't even come close. To eventually ship Codec Avatars, Meta has been working on increasing the system's realism and adaptability, reducing the real-time rendering requirements, and making it possible to generate them with a smartphone scan. For example, last week we reported on Meta's latest progress on highly realistic head-only Codec Avatars that can be generated from a selfie video of you rotating your head, plus around an hour of processing on a server GPU. This has become possible thanks to Gaussian splatting, which in recent years has done for realistic volumetric rendering what large language models (LLMs) did for chatbots. But that system was still designed to run on a powerful PC graphics card. Now, Meta researchers have figured out how to get their full-body Codec Avatars running in real-time on Quest 3. In a paper called "SqueezeMe: Mobile-Ready Distillation of Gaussian Full-Body Avatars", the researchers describe how they distilled their full-body photorealistic avatars to run on a mobile chipset, leveraging both the NPU and GPU. You may have heard the term distillation in the context of LLMs, or AI in general. It refers to using the output of a large, computationally expensive model to train a much smaller model.
The idea is that the small model can replicate the larger model efficiently, with minimal quality loss. The researchers say SqueezeMe can render 3 full-body avatars at 72 FPS on a Quest 3, with almost no quality loss compared to the versions rendered on a PC. However, there are a couple of key tradeoffs to note. These avatars are generated using the traditional massive custom capture array of more than 100 cameras and hundreds of lights, not the new 'universal model' smartphone-scan approach of Meta's other recent Codec Avatars research. They also have flat lighting, and do not support dynamic relighting. This support is a flagship feature of Meta's latest PC-based Codec Avatars, and would be crucial for making them fit into VR environments and mixed reality. Still, this research is a promising step towards Meta eventually shipping Codec Avatars as an actual feature of its Horizon OS headsets. Public pressure for Meta to ship what it has been researching for a decade has built up significantly this year as Apple is shipping its new Personas in visionOS 26, effectively delivering on Meta's promise. However, neither Quest 3 nor Quest 3S have eye tracking or face tracking, and there's no indication that Meta plans to imminently launch another headset with these capabilities. Quest Pro had both, but was discontinued at the start of this year. One possibility is that Meta launches a rudimentary flatscreen version of Codec Avatars with AI simulated face tracking first, to let you join WhatsApp and Messenger video calls with a more realistic form than your Meta Avatar. Meta Connect 2025 will take place from September 17, and the company might share more about its progress on Codec Avatars then.
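The distillation idea the article describes (a small student trained to mimic a large teacher's outputs rather than ground-truth labels) can be shown with a deliberately tiny numeric sketch. This is pure Python and bears no relation to Meta's actual SqueezeMe pipeline; the teacher here is just a stand-in function:

```python
import random

# Toy distillation: a small "student" model learns to mimic a "teacher"
# by training on the teacher's outputs, not on ground-truth labels.
def teacher(x: float) -> float:
    # Stand-in for a large, expensive model (here just a fixed function).
    return 3.0 * x + 1.0

# Student: y = w*x + b, fit with stochastic gradient descent on squared
# error against the teacher's output at randomly sampled inputs.
w, b = 0.0, 0.0
lr = 0.01
random.seed(0)
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    target = teacher(x)           # the distillation signal
    err = (w * x + b) - target
    w -= lr * err * x             # gradient step on w
    b -= lr * err                 # gradient step on b

# The cheap student now closely reproduces the teacher's behavior.
assert abs(w - 3.0) < 0.1 and abs(b - 1.0) < 0.1
```

The real tradeoffs in the article (flat lighting, no relighting) mirror what this sketch hints at: the student can only reproduce behaviors the teacher's outputs exposed during training.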