

The U.S. State Department has announced a sweeping freeze on immigrant visas for citizens of 75 countries, citing concerns that these individuals may become reliant on public assistance. This action,
London - In a welcome reprieve for the United Kingdom, the nation's economy demonstrated unexpected resilience in November, posting a 0.3% growth, according to data released by the Office for National
Gold prices shattered records on Monday, surging past $4,600 an ounce for the first time ever as a confluence of geopolitical tensions and domestic policy uncertainty fueled a rush to safe-haven asset
European leaders have united in a show of solidarity with Greenland, rebuking U.S. President Donald Trump's renewed interest in acquiring the Arctic island. In a joint statement released Tuesday, the
India's delicate economic dance continues as consumer inflation edged up to 1.33% in December, a subtle shift from the previous month's 0.71%. While seemingly a minor adjustment, this upward creep in
Apple is breaking its silence on artificial intelligence, partnering with Google to supercharge its AI capabilities, most notably a significant upgrade to Siri slated for later this year. The multiyea
The stock price of Hilton Hotels dipped slightly on Monday after the Department of Homeland Security (DHS) publicly criticized the hotel chain, alleging that a Minneapolis location abruptly canceled r

While there are no major reports of widespread election fraud directly affecting recent Minnesota election outcomes as of late 2025, several incidents and policy debates have drawn attention, particularly amid broader fraud scandals in state social services programs.

Key Recent Cases of Detected and Prosecuted Voter Fraud

2025 Voter Registration Fraud Scheme: Two Nevada residents (formerly of Minnesota) were charged federally with conspiracy to submit hundreds of fraudulent voter registration applications across multiple counties in 2021-2022. One pleaded guilty in July 2025. The scheme was uncovered by local election officials (starting in Carver County), and no fraudulent ballots were cast or counted. Minnesota Secretary of State Steve Simon emphasized that the case demonstrates the effectiveness of the state's safeguards, as the fraudulent applications were flagged immediately.

Individual Incidents: Isolated cases include a woman sentenced in October 2025 for attempting to cast an absentee ballot (flagged and not counted), and older cases such as an election judge charged in 2024 for improperly allowing unregistered voters. These cases are rare and were prevented from affecting results, according to official statements and reports from sources such as the Associated Press and the Minnesota Secretary of State's office.

Policy Scrutiny Amid Broader Fraud Concerns

A December 29, 2025, Fox News article highlighted Minnesota's long-standing "vouching" policy, which allows a registered voter to vouch for the residency of up to eight others during same-day registration (with no ID required for the vouched voters in some cases). Critics, including conservatives such as Scott Presler and Sen. Mike Lee, argue this creates potential for abuse, especially given same-day registration and recent social services fraud scandals involving Minnesota's Somali community (e.g., hundreds of millions of dollars in alleged misuse of federal funds). State officials counter that the policy has existed for over 50 years with no evidence of systemic exploitation leading to fraudulent votes. The debate has intensified because of unrelated but high-profile welfare and childcare fraud investigations (e.g., the Feeding Our Future scandal), which some Republicans link to election integrity concerns, though no direct connection to voting has been substantiated.

Historical Context

Older allegations, such as 2020 ballot-harvesting claims tied to Rep. Ilhan Omar (promoted by Project Veritas), were largely debunked or lacked corroboration, with some sources retracting their statements.

In summary, proven voter fraud in Minnesota remains extremely limited and has been detected and prevented by existing systems. Claims of widespread fraud often stem from policy critiques or conflation with non-election fraud scandals.

Simply ask Photos to make the edits you want and watch the changes appear. Plus, we’re making it easier to see if an image was edited using AI with C2PA Content Credentials.

Selena Shang, Senior Product Manager, Google Photos

We’re making it unbelievably easy to quickly edit your images in Google Photos — just ask Photos to edit your pictures for you. Coming first to Pixel 10 in the U.S., you can simply describe the edits you want to make by text or voice in Photos’ editor, and watch the changes appear. And to further improve transparency around AI edits, we’re adding support for C2PA Content Credentials in Google Photos.

Edit by simply asking

Our recently redesigned photo editor already makes editing quick and easy for anyone — regardless of your editing expertise — by providing AI-powered suggestions that combine multiple effects for quick edits and putting all our powerful editing tools in one place. You can also simply tap or circle parts of an image right when you open the editor and get suggestions for editing that specific area, like erasing a distraction. Today, we’re introducing conversational editing capabilities in the redesigned photo editor, so you’ll have more ways to make stunning edits, including simple gestures, one-tap suggestions and now, natural language.

Thanks to advanced Gemini capabilities, Photos can now help you make custom AI-powered edits that bring your vision to life in just seconds. No need to select tools or adjust sliders. All you have to do is ask Photos for the edits you want to see. Because this is an open-ended, conversational experience, you don’t have to indicate which tools you want to use. For example, you could ask for a specific edit, like “remove the cars in the background,” or something more general, like “restore this old photo,” and Photos will understand the changes you’re trying to make.

You can even make multiple requests in a single prompt, like “remove the reflections and fix the washed out colors.” And if you truly have no idea where to start, you can just begin by typing or saying “make it better,” or use one of the provided suggestions. Then, if you want to make tweaks, you can add follow-up instructions after each edit to fine-tune your image and get it looking just right.

Beyond corrective edits like fixing lighting and removing distractions, you can ask for more creative help. For example, you could change the background of your image, add fun items like a party hat or sunglasses to the main subject, and so much more. Without having to worry about choosing which tools to use and how they’ll work together, the possibilities are wide open when it comes to editing — all you have to do is tell Photos what you want to see, from simple tweaks to complex edits.

See how your images were made for added transparency

Pixel 10 devices will be the first to implement industry-standard C2PA Content Credentials within the native camera app, across photos created by Pixel Camera, with and without AI. To further improve transparency around how images are made, we’re adding support for C2PA Content Credentials in Google Photos — in addition to the existing support for IPTC metadata for AI-edited images and SynthID for images edited with Reimagine.

Available first for Pixel 10 — and rolling out gradually on Android and iOS devices over the coming weeks — you’ll now be able to see information right in Google Photos indicating how an image was captured or edited, based on C2PA Content Credentials.

Gemini models support many creative and useful features in Google Photos — from search to editing. We’ll continue to explore new ways to use them to bring you helpful additions to the app.
Most people not deeply involved in the artificial intelligence frenzy may not have noticed, but perceptions of AI’s relentless march toward becoming more intelligent than humans, even becoming a threat to humanity, came to a screeching halt Aug. 7. That was the day when the most widely followed AI company, OpenAI, released GPT-5, an advanced product that the firm had long promised would put competitors to shame and launch a new revolution in this purportedly revolutionary technology.

As it happened, GPT-5 was a bust. It turned out to be less user-friendly and in many ways less capable than its predecessors in OpenAI’s arsenal. It made the same sort of risible errors in answering users’ prompts, was no better at math (or even worse), and was not at all the advance that OpenAI and its chief executive, Sam Altman, had been talking up.

“The thought was that this growth would be exponential,” says Alex Hanna, a technology critic and co-author (with Emily M. Bender of the University of Washington) of the indispensable new book “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.” “Instead,” Hanna says, “we’re hitting a wall.”

The consequences go beyond how so many business leaders and ordinary Americans have been led to expect, even fear, the penetration of AI into our lives. Hundreds of billions of dollars have been invested by venture capitalists and major corporations such as Google, Amazon and Microsoft in OpenAI and its multitude of fellow AI labs, even though none of the AI labs has turned a profit.
Public companies have scurried to announce AI investments or claim AI capabilities for their products in the hope of turbocharging their share prices, much as an earlier generation of businesses promoted themselves as “dot-coms” in the 1990s to look more glittery in investors’ eyes. Nvidia, the maker of the high-powered chips powering AI research, plays almost the same role as a stock market leader that Intel Corp., another chipmaker, played in the 1990s — helping to prop up the bull market in equities. If the promise of AI turns out to be as much of a mirage as the dot-coms did, stock investors may face a painful reckoning.

The cheerless rollout of GPT-5 could bring the day of reckoning closer. “AI companies are really buoying the American economy right now, and it’s looking very bubble-shaped,” Hanna told me. The rollout was so disappointing that it shined a spotlight on the degree to which the whole AI industry has been dependent on hype.

Here’s Altman, speaking just before the unveiling of GPT-5, comparing it with its immediate predecessor, GPT-4o: “GPT-4o maybe it was like talking to a college student,” he said. “With GPT-5 now it’s like talking to an expert — a legitimate PhD-level expert in anything any area you need on demand ... whatever your goals are.”

Well, not so much. When one user asked it to produce a map of the U.S. with all the states labeled, GPT-5 extruded a fantasyland, including states such as Tonnessee, Mississipo and West Wigina. Another prompted the model for a list of the first 12 presidents, with names and pictures.
It only came up with nine, including presidents Gearge Washington, John Quincy Adama and Thomason Jefferson. Experienced users of the new version’s predecessor models were appalled, not least by OpenAI’s decision to shut down access to its older versions and force users to rely on the new one. “GPT5 is horrible,” wrote a user on Reddit. “Short replies that are insufficient, more obnoxious ai stylized talking, less ‘personality’ … and we don’t have the option to just use other models.” (OpenAI quickly relented, reopening access to the older versions.)

The tech media was also unimpressed. “A bit of a dud,” judged the website Futurism, and Ars Technica termed the rollout “a big mess.” I asked OpenAI to comment on the dismal public reaction to GPT-5, but didn’t hear back.

None of this means that the hype machine underpinning most public expectations of AI has taken a breather. Rather, it remains in overdrive. A projection of AI’s development over the coming years, published by something called the AI Futures Project under the title “AI 2027,” states: “We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.”

The rest of the document, mapping a course to late 2027 when an AI agent “finally understands its own cognition,” is so loopily over the top that I wondered whether it wasn’t meant as a parody of excessive AI hype. I asked its creators if that was so, but haven’t received a reply.
One problem underscored by GPT-5’s underwhelming rollout is that it exploded one of the most cherished principles of the AI world: that “scaling up” — endowing the technology with more computing power and more data — would bring the grail of artificial general intelligence, or AGI, ever closer to reality. That’s the principle undergirding the AI industry’s vast expenditures on data centers and high-performance chips. The demand for more data and more data-crunching capability will require about $3 trillion in capital just by 2028, in the estimation of Morgan Stanley. That would outstrip the capacity of the global credit and derivative securities markets. But if AI won’t scale up, most if not all of that money will be wasted.

As Bender and Hanna point out in their book, AI promoters have kept investors and followers enthralled by relying on a vague public understanding of the term “intelligence.” AI bots seem intelligent because they’ve achieved the ability to seem coherent in their use of language. But that’s different from cognition. “So we’re imagining a mind behind the words,” Hanna says, “and that becomes associated with consciousness or intelligence. But the notion of general intelligence is not really well-defined.”

Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.
“What I had not realized,” Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — produced a “simpleminded view of intelligence.”

That tendency has been exploited by today’s AI promoters. They label the frequent mistakes and fabrications produced by AI bots as “hallucinations,” which suggests that the bots have perceptions that may have gone slightly awry. But the bots “don’t have perceptions,” Bender and Hanna write, “and suggesting that they do is yet more unhelpful anthropomorphization.”

The general public may finally be cottoning on to the failed promise of AI more generally. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset. Predictions that AI would yield a burst of increased worker productivity haven’t been fulfilled; in many fields productivity has declined, in part because workers have to be deployed to double-check AI outputs, lest mistakes or fabrications find their way into mission-critical applications — legal briefs incorporating nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.

Some economists are dashing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp’s projections.

The value of Bender and Hanna’s book, and the lesson of GPT-5, is the reminder that “artificial intelligence” isn’t a scientific term or an engineering term. It’s a marketing term. And that’s true of all the chatter about AI eventually taking over the world. “Claims around consciousness and sentience are a tactic to sell you on AI,” Bender and Hanna write. So, too, is the talk about the billions, or trillions, to be made in AI. As with any technology, the profits will go to a small cadre, while the rest of us pay the price ... unless we gain a much clearer perception of what AI is and, more importantly, what it isn’t.

Spotify has launched its new Mix With Spotify feature, allowing users to seamlessly blend songs within a playlist like a professional DJ. Offering controls like echo, volume automation, EQ, and low- and high-pass filters, the feature launches in beta on Tuesday (Aug. 19) for Spotify Premium users on the app.

News of Spotify’s new feature comes amid growing interest in customizable, DJ-like music products. In March 2025, competitor Apple Music launched its DJ With Apple Music feature, allowing users to integrate songs from the streaming service into platforms like AlphaTheta, Serato, and inMusic’s Engine DJ, Denon DJ, Numark and RANE DJ — all commonly used by DJs during sets. Start-ups like Hook and Mash-App have also recently debuted, offering users the ability to mash up, speed up and slow down songs in their libraries.

In February, Bloomberg reported that Spotify was working toward a new superfan service that would include “remixing tools” along with high-fidelity audio, concert tickets and more. It’s unclear if Mix With Spotify is what Bloomberg was referencing, but it’s certainly a step toward Spotify integrating more playful, customizable features. As Bob Moczydlowsky, Techstars managing director, predicted to Billboard in 2023: “If streaming 1.0 was about making all the music play, Streaming 2.0 should be about being able to play with all the music.”

To start using Mix With Spotify, users can select between its Custom or Auto mix options, giving them as much or as little control over their transitions as they want. Friends can also collaborate on mixes in shared playlists, allowing everyone to edit transitions together.
When a user starts the mixing process, Spotify will automatically show the key and BPM of each track in the playlist so that users can scan and reorder the playlist to ensure the best flow between songs. After the transitions are made, Spotify also offers a feature to customize the playlist cover art with new stickers and labels that are available only for mixed playlists.
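Spotify hasn’t published how its Auto mix option sequences tracks, but the key-and-BPM display described above supports exactly this kind of reordering. As a rough illustration only, here is a hypothetical greedy sort that minimizes BPM jumps between adjacent songs; the Track type, function name and all numbers are invented for the example, not taken from Spotify.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    bpm: float

def order_by_flow(tracks):
    """Greedy reorder: start from the slowest track, then repeatedly
    append the remaining track whose BPM is closest to the previous
    one, so consecutive transitions stay smooth."""
    remaining = sorted(tracks, key=lambda t: t.bpm)
    ordered = [remaining.pop(0)]
    while remaining:
        prev = ordered[-1]
        nxt = min(remaining, key=lambda t: abs(t.bpm - prev.bpm))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

# Example playlist: the greedy pass walks up from 90 BPM in the
# smallest available steps.
playlist = [Track("Opener", 90), Track("Peak", 128),
            Track("Builder", 95), Track("Cruise", 124)]
ordered = order_by_flow(playlist)
```

A real DJ tool would also weigh harmonic key compatibility (e.g., the Camelot wheel), not BPM alone; this sketch shows only the simplest version of the idea.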


Most of the internet is out of your reach, but the barrier isn't just algorithms. In another language, the same platforms turn into whole other worlds.

When you go online, it feels like you're accessing all the world's information. But you form social media relationships based on shared language. You search Google in the language you think in. And algorithms built to maximise attention have no reason to recommend what you won't understand. So most of the internet remains out of sight, on the other side of a language filter – and you're missing far more than content.

Most internet activity is concentrated on a small number of large platforms, and from our linguistically siloed perspectives, it's easy to assume that everyone uses them in similar ways. But why should that be true? We expect music, literature and cuisine to vary between cultures, after all, so why not the internet? In a new paper, our team at the University of Massachusetts Amherst's Initiative for Digital Public Infrastructure has uncovered stark differences in how different cultures harness the internet. With more research, it may reshape how we think about the services that dominate the web. We're only just beginning to understand the implications.

The history of the internet offers some examples. Take the Russian social media and blogging platform LiveJournal. When it was popular in the mid-2000s, English-speaking users knew it as a space for young people to share their feelings or geek out about Harry Potter. But if you're a Russian speaker, you probably know LiveJournal very differently – as an important site of public intellectualism and political discourse, playing a rare role in hosting voices from the opposition.
With the biggest technology companies based in the US, a cultural blind spot has emerged where we often assume that the English internet is representative of the rest of the world. Research about YouTube in particular has a significant English-speaking bias – typically written in English, published in English-speaking countries and focused on English-language videos.

Ryan McGrady is a senior research fellow at the University of Massachusetts Amherst's Initiative for Digital Public Infrastructure.

The internet's leading platforms are more difficult to study than you might think. Computers can blaze through text, but video is harder to parse at scale. Platforms like YouTube, the world's most popular video service, don't offer tools to create the large representative samples necessary to understand the platform as a whole, or big swaths of it like linguistic communities. As a result, YouTube is often understood through the easily accessible tip of the iceberg: its most popular videos. Between the language bias and this popularity bias, when users, creators, academics, educators, parents, teachers and even policymakers talk about platforms like YouTube, we're typically just talking about the part that's most visible to us – a small, unrepresentative piece of it. (For more, read Thomas Germain's story on the hidden world beneath the shadows of YouTube's algorithm.)

So, how do you study what's under the surface? A couple of years ago, we came up with a way to do what YouTube's tools couldn't: we randomly guessed the URLs of videos – more than 18 trillion times – until we had enough videos to paint a picture of what's really happening on YouTube. What we put together was a first-time look at the inner workings of one of the most influential websites on Earth. With a large enough representative sample, we could begin making broader comparisons. How do videos uploaded in 2019 compare to videos uploaded in 2021?
Do videos of animals get more comments than videos of sports? What kinds of things can we see when we compare popular videos to those with just a handful of views?

Radical differences in cultural norms point to a brand-new understanding of what's happening online (Credit: Getty Images)

Most of all, we wanted to explore linguistic differences: how language and culture shape online participation at a global scale. So, in 2024 we examined language-specific samples of English, Hindi, Russian and Spanish YouTube, working with native speakers to validate our language detection tools. Our goal was to take a high-level view of YouTube in each language to look for broad patterns. We had to acknowledge that YouTube might be just as simple as many people assume: more or less the same across languages. But that's not what we found.

Each language varies along multiple dimensions, but one corner of the platform stood out. In short, Hindi YouTube is radically different from its counterparts. It seems that Hindi users are relating to each other with rhythms and dynamics we didn't see in any other block, and buried in the numbers we could see the story of a major geopolitical conflict.

Let's start with growth. The chart below shows how much of each language was uploaded per year from 2014 to 2023. All four are growing rapidly, but more than half of all Hindi YouTube videos were uploaded in 2023 alone.

The growth of YouTube videos in different languages shows a splintering in the paths of cultural evolution (Credit: University of Massachusetts at Amherst)

Then there's length.
Spanish videos are a little longer than the rest, with a median of about two and a half minutes. English isn't far behind at nearly two minutes, and Russian at one minute 38 seconds. But the median Hindi YouTube video is just 29 seconds long.

These details might sound like interesting quirks – but they're actually a reflection of India's internet history. TikTok was incredibly popular in India long before the app exploded in the US and Europe, but that all changed after India banned the app amid border clashes with China in 2020. Overnight, hundreds of millions of users were cut off from their videos, comments, businesses and self-expression. YouTube rushed in to fill the void, making India the first market for YouTube Shorts, a feature the company built to highlight the short-form vertical video format that made TikTok famous.

It looks to have been successful. More than half of Hindi YouTube – 58% – is made up of Shorts, compared to just 25-31% for the other languages. In many countries, Shorts is just a TikTok clone, but it's become a much larger ecosystem in India.

The influence of TikTok and Shorts shows up in other ways, too. The next chart focuses on videos 30 seconds and less, showing what portion of each language's videos are one second long, two seconds long, and so on. There is a spike across all languages (though particularly extreme in Hindi) at 15 seconds, a default length for TikTok that was then adopted as a default for Shorts.
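The sampling approach behind these numbers, guessing random video URLs until enough resolve, works because uniform draws from the ID space form an unbiased sample. This toy sketch shrinks the problem so it runs offline (4-character IDs and a mock catalogue in place of live URL checks; every name and number here is invented for illustration, not taken from the study), but the estimator is the same idea: the hit rate times the size of the ID space estimates how many videos exist.

```python
import random
import string

# YouTube-style IDs draw from a 64-symbol alphabet; real IDs are 11
# characters (64**11, about 7.4e19 possibilities), far too sparse to
# demo offline, so this sketch uses 4-character IDs (64**4, ~16.8M).
ALPHABET = string.ascii_letters + string.digits + "-_"

def random_id(rng: random.Random, length: int) -> str:
    """Draw one uniformly random ID of the given length."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def dial(exists, id_length: int, n_guesses: int, rng: random.Random):
    """Guess n_guesses random IDs; return the hits plus an estimate of
    the total number of existing IDs (hit rate x size of ID space)."""
    space = len(ALPHABET) ** id_length
    hits = [v for v in (random_id(rng, id_length) for _ in range(n_guesses))
            if exists(v)]
    return hits, len(hits) / n_guesses * space

# Mock catalogue standing in for "does this URL resolve?"
rng = random.Random(0)
catalogue = {random_id(rng, 4) for _ in range(200_000)}
hits, estimated_total = dial(catalogue.__contains__, 4, 50_000, rng)
```

Because the hits are a uniform random sample, any statistic computed over them (median duration, category shares and so on) estimates the same statistic over the whole catalogue, which is what makes the cross-language comparisons in the article possible.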
The rise of TikTok seems to have inspired a spike in 15-second videos, but the differences are dramatic when you compare languages (Credit: University of Massachusetts at Amherst)

Terms like "median duration by language" may seem dry, but here they hint at a sea change in the way people use video in many parts of the world.

Next, we found a telling difference in how people described their own videos. YouTube asks people to categorise their videos. Most users don't bother to change the default, People & Blogs. But when we excluded that, the differences between languages grew sharper. You can see this in the last chart below. In Russian, gaming videos dominate. It's the most popular category in English and Spanish, too. But in Hindi, Entertainment and Education are on top. And for all the attention English-language political content gets in the popular discourse, English has the smallest number of videos in the News & Politics category.

These category labels are more than metadata. They're a look at how different cultures use the platform for different purposes. This data suggests that people in different linguistic communities aren't just making different videos and engaging with them differently – they may be using YouTube for completely different reasons. What we're seeing is parallel internets shaped by local needs, expectations and norms.

Finally, we looked at popularity metrics – views, likes and comments – and once again, Hindi YouTube was an outlier. It demonstrated extreme inequality: just 0.1% of Hindi videos accounted for 79% of views (the other languages ranged from 54% to 59%). But there's an interesting twist. Those less popular videos were far more likely to have likes.
That suggests something deeper. On Hindi YouTube, even the videos that aren't being seen are being appreciated and acknowledged. Our new research suggests YouTube in India may often be used like a video messaging service to talk to friends and family, with public videos often intended for a private audience.

The categories linguistic groups use to tag their videos suggest people use YouTube in meaningfully different ways (Credit: University of Massachusetts at Amherst)

We think some of these differences can be explained by how the internet has been adopted in India, and by the country's TikTok inheritance. This may be a different kind of attention economy: less about mass reach, more about small, meaningful engagement. It may be a sign of something more intimate, and perhaps even more human.

We still have a lot of work to do, and a lot of videos to watch, before we can make these claims definitively. But what's already clear is that language doesn't just shape your view of digital life – it can obscure the diverse, culturally specific ways people use these platforms. We're building businesses, journalism and regulation on an artificially limited view of the internet, one often filtered through English, popularity and convenience. It's time we looked deeper.
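The concentration statistic quoted earlier (0.1% of Hindi videos drawing 79% of views, versus 54-59% elsewhere) is a "top share" measure: sort videos by views and ask what fraction of all views the top slice captures. A minimal sketch, with invented numbers rather than the study's data:

```python
def top_share(view_counts, top_frac=0.001):
    """Fraction of total views captured by the top `top_frac` of videos.

    Sorts by views (descending), takes the top slice (at least one
    video), and divides that slice's views by the grand total.
    """
    ranked = sorted(view_counts, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

# One viral video among 1,000 near-invisible ones: the top 0.1%
# (a single video) captures the overwhelming majority of views.
skewed = [10_000] + [1] * 999
# Perfectly equal views: the top 0.1% holds exactly 0.1% of views.
flat = [5] * 1000
```

The same function applied per language makes outliers like Hindi YouTube jump out: the higher the top share, the more a language's viewing is concentrated on a tiny head of viral videos.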

Caltech scientists have created a hybrid quantum memory that converts electrical information into sound, allowing quantum states to last 30 times longer than in standard superconducting systems. Their mechanical oscillator, like a microscopic tuning fork, could pave the way for scalable and reliable quantum storage.

Quantum Bits vs. Classical Bits

While traditional computers rely on bits, the basic units of information that can only be 0 or 1, quantum computers operate with qubits. Unlike ordinary bits, qubits can exist as both 0 and 1 at the same time. This unusual behavior, a quantum physics effect called superposition, is what gives quantum computing its extraordinary potential to solve problems that are far beyond the reach of conventional machines.

Most quantum computers today are built using superconducting electronic systems, in which electrons move without resistance at extremely low temperatures. Within these systems, carefully engineered resonators allow electrons to form superconducting qubits. These qubits excel at carrying out fast, complex operations, but they are not well suited for long-term storage. Preserving information in the form of quantum states (mathematical descriptions of specific quantum systems) remains a major challenge. To address this, researchers have been working on creating "quantum memories" that can hold quantum information far longer than standard superconducting qubits.

Using Sound to Remember Quantum Information

A scanning electron microscope image highlighting a single mechanical oscillator, or "tuning fork," from the new work. The false-colored golden lines in the image indicate the location of electrodes that transfer electrical signals between the superconducting qubit and the mechanical oscillator. (Credit: Omid Golami)

A team at Caltech has now developed a new hybrid method to extend quantum memory.
By converting electrical signals into sound, they enabled quantum states from superconducting qubits to remain stable for up to 30 times longer than with earlier approaches. The research, led by graduate students Alkim Bozkurt and Omid Golami under the supervision of Mohammad Mirhosseini, assistant professor of electrical engineering and applied physics, was published in Nature Physics. “Once you have a quantum state, you might not want to do anything with it immediately,” Mirhosseini says. “You need to have a way to come back to it when you do want to do a logical operation. For that, you need a quantum memory.” Harnessing Sound for Quantum Storage Previously, Mirhosseini’s group showed that sound, specifically phonons, which are individual particles of vibration (in the way that photons are individual particles of light) could provide a convenient method for storing quantum information. The devices they tested in classical experiments seemed ideal for pairing with superconducting qubits because they worked at the same extremely high gigahertz frequencies (humans hear at hertz and kilohertz frequencies that are at least a million times slower). They also performed well at the low temperatures needed to preserve quantum states with superconducting qubits and had long lifetimes. Now Mirhosseini and his colleagues have fabricated a superconducting qubit on a chip and connected it to a tiny device that scientists call a mechanical oscillator. Essentially a miniature tuning fork, the oscillator consists of flexible plates that are vibrated by sound waves at gigahertz frequencies. When an electric charge is placed on those plates, the plates can interact with electrical signals carrying quantum information. This allows information to be piped into the device for storage as a “memory” and be piped out, or “remembered,” later. 
Storage Times Far Exceed Expectations The researchers carefully measured how long it took for the oscillator to lose its valuable quantum content once information entered the device. “It turns out that these oscillators have a lifetime about 30 times longer than the best superconducting qubits out there,” Mirhosseini says. This method of constructing a quantum memory offers several advantages over previous strategies. Acoustic waves travel much slower than electromagnetic waves, enabling much more compact devices. Moreover, mechanical vibrations, unlike electromagnetic waves, do not propagate in free space, which means that energy does not leak out of the system. This allows for extended storage times and mitigates undesirable energy exchange between nearby devices. These advantages point to the possibility that many such tuning forks could be included in a single chip, providing a potentially scalable way of making quantum memories. The Path Forward Mirhosseini says this work has demonstrated the minimum amount of interaction between electromagnetic and acoustic waves needed to probe the value of this hybrid system for use as a memory element. “For this platform to be truly useful for quantum computing, you need to be able to put quantum data in the system and take it out much faster. And that means that we have to find ways of increasing the interaction rate by a factor of three to 10 beyond what our current system is capable of,” Mirhosseini says. Luckily, his group has ideas about how that can be done. Reference: “A mechanical quantum memory for microwave photons” by Alkım B. Bozkurt, Omid Golami, Yue Yu, Hao Tian and Mohammad Mirhosseini, 13 August 2025, Nature Physics. DOI: 10.1038/s41567-025-02975-w Additional authors of the paper are Yue Yu, a former visiting undergraduate student in the Mirhosseini lab; and Hao Tian, an Institute for Quantum Information and Matter postdoctoral scholar research associate in electrical engineering at Caltech. 
The work was supported by funding from the Air Force Office of Scientific Research and the National Science Foundation. Bozkurt was supported by an Eddleman Graduate Fellowship.
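The compactness advantage described above follows directly from wavelength = speed / frequency: at the same gigahertz frequency, sound in a solid has a far shorter wavelength than a microwave. A back-of-envelope check, using assumed representative numbers (a 5 GHz operating frequency and a sound speed of roughly 8,000 m/s; these are illustrative orders of magnitude, not figures from the paper):

```python
# Wavelength = propagation speed / frequency.
# All numbers below are illustrative assumptions, not values from the study.
f = 5e9          # Hz: typical superconducting-qubit operating frequency
v_sound = 8e3    # m/s: order-of-magnitude sound speed in a crystalline solid
c = 3e8          # m/s: speed of light, governing microwave resonator size

lam_acoustic = v_sound / f   # ~1.6 micrometers
lam_em = c / f               # ~6 centimeters

print(lam_acoustic, lam_em, lam_em / lam_acoustic)
```

That difference of more than four orders of magnitude is why acoustic resonators can be packed onto a single chip far more densely than electromagnetic ones.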

The discovery could challenge current ideas about how galaxies formed in the early universe. [Image: Hundreds of unusually bright early galaxy candidates have been identified in deep-field images from NASA's James Webb Space Telescope. (Image credit: Bangzheng "Tom" Sun)] Hundreds of unexpectedly energetic objects have been discovered throughout the distant universe, possibly hinting that the cosmos was far more active during its infancy than astronomers once believed. Using deep-field images from NASA's James Webb Space Telescope (JWST), researchers at the University of Missouri identified 300 unusually bright objects in the early universe. While they could be galaxies, astronomers aren't yet sure what they are. Galaxies forming so soon after the Big Bang should be faint, limited by the pace at which they could form stars. Yet these candidates shine far brighter than current models of early galaxy formation predict. "If even a few of these objects turn out to be what we think they are, our discovery could challenge current ideas about how galaxies formed in the early universe — the period when the first stars and galaxies began to take shape," Haojing Yan, co-author of the study, said in a statement from the university.
To discover these objects, the team applied a method called the "dropout" technique, which detects objects that appear in redder wavelengths but vanish in bluer, shorter-wavelength images. This indicates the objects are extremely distant, showing the universe as it was more than 13 billion years ago. To estimate distances, the team analyzed the objects' brightnesses across multiple wavelengths to infer redshift, age and mass. JWST's powerful Near-Infrared Camera and Mid-Infrared Instrument are designed to detect light from the farthest reaches of space, making them ideal for studying the early universe. "As the light from these early galaxies travels through space, it stretches into longer wavelengths — shifting from visible light into infrared," Yan said in the statement. "This stretching, called redshift, helps us determine how far away these galaxies are. The higher the redshift, the closer the galaxy is to the beginning of the universe." Next, the researchers hope to use targeted spectroscopic observations, focusing on the brightest sources. Confirming the newly found objects as genuine early galaxies would refine our current understanding of how quickly the first cosmic structures formed and evolved — and add to the growing list of transformative discoveries made by the JWST since it began observing the cosmos in 2022.
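The "dropout" selection and the redshift stretching described above can be sketched in a few lines. Everything here is illustrative: the magnitude values, color cut, and detection limit are invented for the example, not taken from the study (astronomical magnitudes increase as objects get fainter).

```python
# Hypothetical dropout selection: a candidate is well detected in a red
# band but absent, or far fainter, in a bluer band. Thresholds are
# made-up illustrative values, not the study's actual criteria.
def is_dropout(mag_blue, mag_red, color_cut=2.0, detection_limit=28.0):
    detected_red = mag_red < detection_limit
    missing_blue = (mag_blue >= detection_limit
                    or (mag_blue - mag_red) > color_cut)
    return detected_red and missing_blue

# Redshift stretches every emitted wavelength by a factor of (1 + z).
def observed_wavelength(lambda_emit_nm, z):
    return lambda_emit_nm * (1 + z)

# An object at magnitude 26.5 in the red band but unseen in the blue
# (99.0 used as a "not detected" placeholder) is flagged as a candidate.
print(is_dropout(mag_blue=99.0, mag_red=26.5))   # True

# Lyman-alpha light (121.6 nm, ultraviolet) emitted at z = 12 arrives
# stretched to roughly 1581 nm, well into the infrared that JWST covers.
print(observed_wavelength(121.6, z=12))
```

In practice, as the article notes, teams fit the object's brightness across many wavelengths to estimate redshift, age, and mass; a simple color cut like this is only the first filtering step.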

Babies stranded in space, zombie football players, and soap operas with cats – we live in an era when YouTube is almost flooded with videos created by artificial intelligence. The sharp rise of such channels means that a significant amount of content is now generated by AI rather than by a person behind the camera. According to analytics, nine of the hundred fastest-growing channels in the past month published purely AI-generated videos. Examples include plots in which a baby is crammed into a rocket just before liftoff, uncanny images of sports stars, and melodramas featuring anthropomorphic cats. The popularity of such material is rising as powerful video-creation tools emerge, including Veo 3 and Grok Imagine. The total subscriber count on these channels runs into the millions: about 1.6 million on the baby-in-space channel and nearly 4 million on the anthropomorphic-cat channel, where the plotlines sometimes run to extremes. Most of these videos fall into the "AI slop" category – mass-produced, low-quality content, sometimes surreal or eerie, but sometimes with a fairly coherent plot, indicating the growing technical sophistication of AI content. "All content uploaded to YouTube falls under our community guidelines – regardless of how it was created." – YouTube representative After inquiries to the platform, it was reported that some channels were removed, while others had monetization restricted. The exact numbers and names were not specified. A digital culture expert described AI-video generators as the next wave of "pollution" of the internet, a term first proposed by a writer who voiced concerns about the quality of online content. In his view, AI content could undermine users' trust by substituting quickly generated "garbage" versions for high-quality material. "AI slop fills the internet with content that is essentially garbage. This pollution undermines online communities on Pinterest, competes for revenue with artists on Spotify, and fills YouTube with low-quality content." – Dr. Akhil Bhardwaj, University of Bath He added that one way to regulate this would be to ban monetization of AI content, which would reduce the incentive to create it. Such a step could make the platform less attractive for mass production of AI videos. Ryan Broderick, author of the Garbage Day newsletter, has sharply criticized the impact of AI videos on YouTube, calling the platform a "dumpster" for unsettling, soulless AI clips and dubious content in general. Alongside YouTube, Instagram is also experiencing a flood of AI content: Reels grafting celebrities' heads onto animal bodies have drawn millions of views. TikTok is seeing viral AI videos as well, in particular comic scenarios with cats competing in unusual contests. Meanwhile, platforms require labeling of realistic AI videos and use deepfake-detection systems to reduce the risks of misinformation and manipulation.

With Meta, Google, Samsung, and maybe even Apple working on AI-powered glasses, smart spectacles are quickly becoming the hottest gadget in tech. Now, even HTC is jumping in on the trend with a new pair of Vive Eagle smart glasses that come with built-in speakers, a 12MP ultrawide camera, and an AI voice assistant. The Vive Eagle glasses are currently only available for purchase in Taiwan, but they seem like a direct rival to Meta’s Ray-Ban and Oakley smart glasses. They come with AI-powered image translation, which lets wearers ask the Vive AI voice assistant to translate what they’re seeing into 13 different languages. Other features include the ability to record reminders, ask for restaurant recommendations, and take notes (sound familiar?). HTC says its Vive Eagle glasses weigh just 49 grams, around the same as Meta’s Ray-Ban smart glasses. The Vive Eagle glasses cost around $520 USD and come equipped with Zeiss sun lenses, with options for a red, brown, gray, or black frame. It’s not clear when — or if — HTC plans on bringing these smart glasses to North America or Europe, but Meta might have some competition if it does.

Apple is still hard at work on becoming a relevant player in AI. The latest missive from Mark Gurman at Bloomberg suggests that Apple is shifting its artificial intelligence goals to center on new device segments. Sources reportedly told the publication that Apple has a slate of new smart home products in the works that could help pivot its lagging AI strategy. The center of the new lineup is a tabletop AI companion that has been described as an iPad on a movable robotic arm. It would be able to swivel to face the screen toward a user as they move around their home or office. Sources said the current prototype uses a horizontal display that's about seven inches while the motorized arm can move the screen about six inches away from the base in any direction. Equipped with a long-promised overhaul to the Siri voice assistant, this device could act like an additional person, recalling information, making suggestions and participating in conversations. According to Bloomberg, Apple is targeting a 2027 release for this product. Apple's new lineup is also rumored to include a smart home hub that is a simpler version of the robotic friend with no moving stand. We might be seeing this sooner, with a projected 2026 release for the device. This hub device would be able to control music playback, take notes, browse the web and host videoconferencing. Both the robot companion and the smart home hub are reportedly running a new operating system called Charismatic that's designed to support multiple users. The Siri running on the device will be given a particularly cheery personality, and it may also be getting a visual representation. Bloomberg's sources said there hasn't been a final decision on aesthetics; internal tests have had Siri looking like an animated Finder icon and like a Memoji. Today's scuttlebutt follows on previous reports from Gurman that pointed to Apple's interest in these categories. 
The idea of a smart home hub was apparently floated at the company as far back as 2022, and it's finally being rumored to have a formal debut some time this year. Robots have also been a topic of interest in Cupertino for some time, with claims that Apple was developing a personal robot dating back at least to last spring. While this Bloomberg piece offers more detail about those hypothetical plans, there's always a chance Apple will change direction or scrap a project.

Google has been in a strange place with autofill for some time. While Google has a comprehensive password manager that spans Android and Chrome, I've never found it as seamless as I might want it to be. That seems to be the focus of the latest update heading to your Google phone. According to a report from 9to5Google, Gboard (the default keyboard on Pixel and my recommendation for all other Android phones too) is going to get a proper shortcut into autofill. Currently, when you go to enter a password you might see a line across the top of the keyboard suggesting some of the passwords you have for that site or app. I've always found this to be hit and miss, with some apps never getting a suggestion and some working perfectly fine. In the future, however, Gboard will ask you if you want to "Use Autofill with Google". This will make a shortcut available, which you can add to the top row of the keyboard. You can then tap that when you land on a site or app that you need autofill details for. When you tap on the shortcut, you'll have the option to access passwords or payment details, so you can fill in what's needed and get on with your day. However, according to the details, it only shows passwords that are applicable to the property that you're currently trying to access, which won't help if, for example, the login destination is slightly different.
This sometimes happens when a company changes how its login is structured, or if you saved the password when signing into the app and you're now trying to sign into the website, which might identify differently. Although this should be a better way to force the issue, rather than relying on Gboard's current and slightly temperamental offering, the lack of wider searching means that if you can't find the credentials you want, you'll have to go back to the old method of manually searching and using copy-paste. As for payment, I've found that generally Google Pay works very well, but there are some apps and websites where it just doesn't work and I still find myself manually plugging in the details (the desktop offering through Chrome seems to work much better). Having the option to force-fill payment details could make life a lot easier. 9to5Google reports that this new function is available in the Gboard beta, but it hasn't appeared on my device, so there could be some regional factors at play here too. It sounds promising, although there's no avoiding that autofill on Android is still a bit messy – hopefully, these changes will make the experience better.

Reddit says that it has caught AI companies scraping its data from the Internet Archive's Wayback Machine, so it's going to start blocking the Internet Archive from indexing the vast majority of Reddit. The Wayback Machine will no longer be able to crawl post detail pages, comments, or profiles; instead, it will only be able to index the Reddit.com homepage, which effectively means the Internet Archive will only be able to archive insights into which news headlines and posts were most popular on a given day. "Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine," spokesperson Tim Rathschmidt tells The Verge. The Internet Archive's mission is to keep a digital archive of websites on the internet and "other cultural artifacts," and the Wayback Machine is a tool you can use to look at pages as they appeared on certain dates, but Reddit believes not all of its content should be archived that way. "Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content) we're limiting some of their access to Reddit data to protect redditors," Rathschmidt says. The limits will start "ramping up" today, and Reddit says it reached out to the Internet Archive "in advance" to "inform them of the limits before they go into effect," according to Rathschmidt. He says Reddit has also "raised concerns" in the past about people's ability to scrape content from the Internet Archive. Reddit has a recent history of cutting off access to scraper tools as AI companies have begun to use (and abuse) them en masse, but it's willing to provide that data if companies pay. Reddit struck a deal with Google covering both Google Search and AI training data early last year, and a few months later, it started blocking major search engines from crawling its data unless they pay.
It also said its infamous API changes from 2023, which forced some third-party apps to shut down, leading to protests, were because those APIs were abused to train AI models. Reddit also struck an AI deal with OpenAI, but it sued Anthropic in June, claiming Anthropic was still scraping from Reddit even after Anthropic said it wasn’t scraping anymore. “We have a longstanding relationship with Reddit and continue to have ongoing discussions about this matter,” Mark Graham, director of the Wayback Machine, says in a statement to The Verge.