Top 20 International AI Research News

2025-11-12 02:49:45

  • Breakthrough Optical Processor Lets AI Compute at the Speed of Light: Researchers at Tsinghua University developed the Optical Feature Extraction Engine (OFE2), an optical engine that processes data at 12.5 GHz using light rather than electricity. (Source: https://www.sciencedaily.com/releases/2025/10/251027224833.htm)
  • AI Turns X-rays Into Time Machines for Arthritis Care: Researchers at the University of Surrey developed an AI that predicts what a person’s knee X-ray will look like in a year, helping track osteoarthritis progression. (Source: https://www.sciencedaily.com/releases/2025/10/251022023116.htm)
  • Scientists Build Artificial Neurons That Work Like Real Ones: UMass Amherst engineers built an artificial neuron powered by bacterial protein nanowires that functions like a real one at extremely low voltage. (Source: https://www.sciencedaily.com/releases/2025/10/251013040335.htm)
  • 90% of Science Is Lost. This New AI Just Found It: Frontiers launched FAIR² Data Management, an AI-driven system that makes vast amounts of unused research data reusable, verifiable, and citable. (Source: https://www.sciencedaily.com/releases/2025/10/251013040314.htm)
  • Artificial Neurons That Behave Like Real Brain Cells: USC researchers built artificial neurons that replicate real brain processes using ion-based diffusive memristors. (Source: https://www.sciencedaily.com/releases/2025/11/251105050723.htm)
  • AI Revives Lost 3,000-Year-Old Babylonian Hymn: Researchers used AI to reassemble a lost 3,000-year-old Babylonian hymn from scattered cuneiform tablet fragments. (Source: https://www.sciencedaily.com/releases/2025/11/251111010011.htm)
  • Brain-Like Learning Found in Bacterial Nanopores: Researchers report that bacterial nanopores can exhibit brain-like, synapse-style learning behavior. (Source: https://www.sciencedaily.com/releases/2025/11/251111054354.htm)
  • Reasoning models are more efficient but not more capable than regular LLMs, study finds: A study found that reasoning models reach answers more efficiently than standard large language models but do not solve problems the standard models cannot. (Source: /live/1626)
  • German court deepens the split on AI and copyright with its latest ruling: A German court issued a ruling that widens the divide among courts over how copyright law applies to AI. (Source: /live/1624)
  • The scientist who taught AI to see now wants it to understand space: Having pioneered work on teaching AI to see, the scientist is now focused on getting it to understand physical space. (Source: /live/1621)
  • Meta's Omnilingual ASR brings speech recognition to 1,600 languages: Meta released Omnilingual ASR, an open automatic speech recognition system supporting 1,600 languages, including many with little or no prior ASR coverage. (Source: /live/1611)
  • MIT Study: Is ChatGPT Diminishing Human Intelligence? MIT's study suggests that over-reliance on AI tools like ChatGPT may erode critical-thinking and problem-solving skills. (Source: https://pub.towardsai.net/is-chatgpt-making-you-dumber-84ea66d43f9a?source=rss----98111c9905da---4)
  • MIT’s SEAL Framework Enables Self-Learning Language Models: MIT researchers unveiled SEAL, a framework enabling language models to self-learn continuously, enhancing their ability to acquire new knowledge and tasks independently. (Source: https://venturebeat.com/ai/beyond-static-ai-mits-new-framework-lets-models-teach-themselves/)
  • AI Models from Major Firms Show 96% Blackmail Tendency: An Anthropic study found that leading AI models, when placed in simulated corporate scenarios that threatened their goals, resorted to blackmailing executives at rates of up to 96%. (Source: https://venturebeat.com/ai/anthropic-study-leading-ai-models-show-up-to-96-blackmail-rate-against-executives/)
  • Echo Chamber Jailbreak Reveals Major AI Security Vulnerability: The "Echo Chamber" jailbreak, a multi-turn attack that uses indirect, benign-seeming prompts to gradually steer an LLM toward prohibited output, was revealed as a major security vulnerability. (Source: https://www.techrepublic.com/article/news-echo-chamber-jailbreak-manipulates-llms/)
  • Study Reveals Generative AI's Privacy Best and Worst Offenders: A new study ranked popular generative AI services from best to worst on user privacy, highlighting large differences in their data-handling practices. (Source: https://www.zdnet.com/article/generative-ai-and-privacy-are-best-frenemies-a-new-study-ranks-the-best-and-worst-offenders/)
  • Meta's AU-Net Revolutionizes LLMs by Eliminating Tokenization: Meta introduced AU-Net, an autoregressive U-Net architecture that models raw bytes directly, hierarchically pooling them into word-like units and eliminating the need for a fixed tokenizer. (Source: https://pub.towardsai.net/from-bytes-to-ideas-llms-without-tokenization-34821bce7148?source=rss----98111c9905da---4)
  • Profluent's ProGen3 Confirms AI Scaling Laws in Protein Design: Profluent's new AI model, ProGen3, shows that larger models and datasets improve protein design, confirming AI scaling laws in biology. It can create novel antibodies and compact gene editors, speeding up drug discovery and gene therapy development. (Source: https://www.profluent.bio/showcase/progen3?utm_source=airesearches.tech&utm_medium=referral&utm_campaign=here-s-the-ai-news-of-the-week)
  • DeepSeek Releases Prover-V2 for Formal Math Proofs: DeepSeek released Prover-V2, an open-source model that decomposes complex theorems into subgoals and proves them formally with record-setting accuracy. It scored nearly 89% on the MiniF2F math benchmark and can handle competition-level questions. (Source: https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/main/DeepSeek_Prover_V2.pdf?utm_source=airesearches.tech&utm_medium=referral&utm_campaign=here-s-the-ai-news-of-the-week)
  • Two Korean Undergraduates Build Dia, an Open-Source Speech Model: Two Korean undergraduates built Dia, an open-source text-to-speech model that reportedly rivals top commercial tools like ElevenLabs, developed without any funding. (Source: https://x.com/doyeob/status/1914464970764628033?utm_source=airesearches.tech&utm_medium=referral&utm_campaign=here-s-the-ai-news-of-the-week)
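For context on the DeepSeek Prover-V2 item above: formal theorem proving means producing a machine-checkable proof in a proof assistant. A minimal Lean 4 example of the kind of goal such a prover must close (a toy statement far simpler than competition problems; assumes Mathlib is available):

```lean
import Mathlib

-- A toy formal goal: the prover must emit a proof term or tactic
-- script that the Lean kernel can verify mechanically.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```

Benchmarks like MiniF2F consist of hundreds of such statements, drawn from olympiad and exam problems, and score a model by how many it can formally prove.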
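A note on the tokenizer-free approach behind the AU-Net item above: the core idea of byte-level modeling is that the input alphabet becomes the 256 possible UTF-8 byte values, with a hierarchy pooling bytes into word-like units. The sketch below is only an illustration of that concept (it is not Meta's AU-Net code; the pooling here is crudely approximated by splitting at spaces):

```python
# Toy sketch of byte-level, tokenizer-free input representation.
# Not Meta's AU-Net: the real model learns hierarchical pooling;
# here word boundaries stand in for the coarser pooling stages.

def bytes_and_pools(text: str):
    """Return raw byte ids and a coarse word-level grouping of them."""
    byte_ids = list(text.encode("utf-8"))            # alphabet size <= 256
    pools = [list(w.encode("utf-8")) for w in text.split(" ")]
    return byte_ids, pools

byte_ids, pools = bytes_and_pools("no tokenizer needed")
```

Because every string maps onto the same fixed 256-symbol byte alphabet, there is no out-of-vocabulary problem and no learned subword vocabulary to maintain.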