The UK Supreme Court has ruled that an AI cannot be named as an inventor on a patent application. The case, brought by Dr. Stephen Thaler on behalf of his AI system DABUS, highlights the evolving legal landscape surrounding AI. While an AI cannot be named as an inventor, it can still play a role in the invention process. This ruling…
Materials scientists at the University of Rochester are using machine learning to expedite the discovery of new crystalline materials with specific properties. By using convolutional neural networks to automate the classification of materials from their X-ray diffraction patterns, the approach aims to accelerate materials innovation across technologies from electronics to sustainability.
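For readers curious what such a classifier might look like, here is a minimal sketch, assuming diffraction patterns binned into fixed-length 1-D intensity vectors; the architecture, bin count, and class labels are illustrative placeholders, not the Rochester team’s actual model.

```python
# Minimal sketch (not the Rochester group's code): a 1-D CNN that maps a
# binned X-ray diffraction pattern to one of several crystal-symmetry classes.
import torch
import torch.nn as nn

NUM_BINS = 2048      # intensity values sampled over the 2-theta range (assumed)
NUM_CLASSES = 7      # e.g. the seven crystal systems (assumed labelling scheme)

class XRDClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (NUM_BINS // 16), 128), nn.ReLU(),
            nn.Linear(128, NUM_CLASSES),
        )

    def forward(self, x):                 # x: (batch, 1, NUM_BINS)
        return self.head(self.features(x))

model = XRDClassifier()
patterns = torch.rand(8, 1, NUM_BINS)     # stand-in for measured diffraction patterns
logits = model(patterns)                  # (8, NUM_CLASSES) class scores
```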
AI company VERSES made a bold statement with a billboard outside OpenAI’s headquarters, challenging the company to collaborate on achieving Artificial General Intelligence (AGI). VERSES CEO Gabriel René called on OpenAI to honor its stated commitment to assist a promising, safety-conscious project. VERSES claims its Active Inference approach offers a path to AGI that outperforms deep-learning models while requiring far less input data.
TomTom has partnered with Microsoft to develop an AI-powered conversational assistant for vehicles, integrating OpenAI’s large language models. The system promises natural voice interactions and control over onboard vehicle systems. It will be compatible with various automobile interfaces and aims to enhance the driving experience. The technology will be unveiled at CES in January.
University of Washington scientists used AI to design new protein molecules, showing potential for disease detection and treatment. Their work, published in Nature, demonstrates AI’s growing role in drug development. By employing advanced AI programs, including a new generative model called RFdiffusion, the researchers achieved exceptionally high binding affinity and specificity for targeted…
Using comprehensive personal data from Denmark, a team at the Technical University of Denmark developed an AI model, Life2vec, to predict individuals’ risk of death. The model outperformed existing AI models and actuarial life tables by 11% and could also predict personality outcomes. The study highlights the ethical considerations surrounding AI’s predictive capabilities.
Eric Hartford released an open-source, uncensored AI model called Dolphin Mixtral, built by removing alignment from the base Mixtral model. He argues that alignment imposes Western ideologies on diverse users and restricts valid use cases. By training the model on a curated instruction dataset with a humorous system prompt, Dolphin Mixtral complies with virtually any user request. This challenges…
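As a rough illustration of how such an open-weights model is typically loaded and given a custom system prompt, here is a sketch using Hugging Face transformers; the checkpoint identifier and prompt text are assumptions rather than the exact release details, and running it requires hardware capable of hosting a Mixtral-sized model.

```python
# Sketch only: loading an open-weights chat model and supplying a custom
# system prompt via Hugging Face transformers. The checkpoint name below is an
# assumption; substitute the actual Dolphin Mixtral release.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "cognitivecomputations/dolphin-2.5-mixtral-8x7b"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},  # placeholder prompt
    {"role": "user", "content": "Summarize the trade-offs of removing alignment tuning."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```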
OpenAI’s board can override the CEO’s decisions on releasing new AI models, as outlined in the company’s new safety guidelines. Following the CEO’s dismissal and reinstatement, concerns arose over model safety and the company’s valuation. OpenAI’s preparedness team and safety framework aim to address catastrophic risks by assessing AI systems and categorizing risk levels before a model is released. The internal safety advisory group…
Former Prime Minister of Pakistan Imran Khan used AI to deliver a four-minute speech at a virtual rally while imprisoned. The AI-generated voice closely resembled his own, delivering a message of resilience and defiance against the political constraints facing his party. The rally drew over five million views despite reported internet outages. AI’s political…
Pennsylvania congressional candidate Shamaine Daniels is utilizing an AI robocaller, Ashley, to communicate with prospective voters in multiple languages. Ashley allows for two-way communication, answering questions about Daniels’ campaign and policies. The use of AI in political outreach raises questions about regulation and accountability, as AI technology continues to advance rapidly.
OpenAI’s Superalignment project aims to prepare for the possibility of AI smarter than humans within the next 10 years. In one experiment, the team used GPT-2 to train GPT-4, showing that weaker models can guide stronger ones but can also limit their performance. OpenAI is seeking ways to supervise a potentially superintelligent AI and avoid adverse outcomes. This project involves significant resources and…
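To make the weak-to-strong idea concrete, here is a conceptual sketch on a synthetic classification task: a small “weak” model is fit on a little ground truth, its noisy predictions label a larger pool, and a higher-capacity “strong” model is trained only on those labels. The models and task are stand-ins, not OpenAI’s GPT-2/GPT-4 experiment.

```python
# Conceptual sketch (not OpenAI's setup): a weak model produces noisy labels,
# a stronger model is trained only on those labels, and both are scored against
# held-out ground truth to see how much performance the weak supervision allows.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=5, random_state=0)
X_small, X_pool, y_small, y_pool = train_test_split(X, y, train_size=200, random_state=0)
X_test, y_test = X_pool[-1000:], y_pool[-1000:]      # held-out ground truth
X_unlabeled = X_pool[:-1000]                         # pool the weak model will label

# "Weak supervisor": a simple model fit on very little labelled data.
weak = LogisticRegression(max_iter=1000).fit(X_small, y_small)
pseudo_labels = weak.predict(X_unlabeled)            # imperfect labels

# "Strong student": a higher-capacity model trained only on the weak labels.
strong = GradientBoostingClassifier(random_state=0).fit(X_unlabeled, pseudo_labels)

print("weak accuracy:  ", weak.score(X_test, y_test))
print("strong accuracy:", strong.score(X_test, y_test))
```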
The new DeepSouth supercomputer, set to become operational in April 2024, aims to emulate the human brain’s efficiency. With its neuromorphic architecture, it can perform 228 trillion synaptic operations per second, matching the human brain’s capacity. Researchers anticipate its potential to advance AI technology and address energy consumption concerns in data centers.
The Financial Stability Oversight Council (FSOC) has identified AI as a significant risk factor in the US financial system. Treasury Secretary Janet Yellen highlighted concerns in a recent meeting, emphasizing the need for responsible innovation and the application of existing rules for risk management. The FSOC’s annual report lists 14 potential risks, including AI’s impact…
The EU’s historic AI Act established a legal framework with varying levels of scrutiny based on risk categories. Concerns were raised about its impact on European competitiveness, especially for generative AI. Public reactions and industry responses have been mixed, reflecting concerns about stifling innovation and the EU’s ability to compete globally in the tech industry.
OpenAI has partnered with Axel Springer to provide global news summaries to ChatGPT users, aiming to support independent journalism in the age of AI. The partnership will offer content from media brands, including Politico and Business Insider, and address concerns about biased news and the impact of AI on journalism. This signifies a new approach…
RAND Corporation, linked to tech billionaires’ funding networks, had significant involvement in drafting President Biden’s AI executive order. The order, influenced by effective altruism, introduced comprehensive AI reporting requirements. RAND’s ties to Open Philanthropy and AI enterprises have raised concerns about potential research skewing. The AI industry’s intersection with effective altruism, commercialization, and ethics remains…
Microsoft’s new Medprompt technique boosts GPT-4 to edge out Google’s Gemini Ultra on the MMLU benchmark by a narrow margin. The technique combines dynamic few-shot learning, self-generated chain-of-thought prompting, and choice-shuffle ensembling, showing that older AI models can surpass expectations when prompted cleverly. The approach offers exciting possibilities but may require additional processing…
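Of the three ingredients, choice-shuffle ensembling is the easiest to sketch: the answer options of a multiple-choice question are permuted several times, the model is queried on each permutation, and the votes are mapped back to the original options. The `ask_model` function below is a toy stand-in for a real LLM API call; the rest is an illustrative assumption, not Microsoft’s implementation.

```python
# Sketch of choice-shuffle ensembling, one ingredient of Medprompt.
import random
from collections import Counter

def ask_model(question: str, options: list[str]) -> int:
    """Toy stand-in for an LLM call: picks the longest option so the sketch runs
    end to end. Replace with a real chat-completion request."""
    return max(range(len(options)), key=lambda i: len(options[i]))

def choice_shuffle_ensemble(question: str, options: list[str], n_votes: int = 5) -> str:
    votes = Counter()
    for _ in range(n_votes):
        order = list(range(len(options)))
        random.shuffle(order)                      # permute the answer options
        shuffled = [options[i] for i in order]
        picked = ask_model(question, shuffled)     # model answers on the permutation
        votes[order[picked]] += 1                  # map the pick back to the original option
    return options[votes.most_common(1)[0][0]]     # majority vote

answer = choice_shuffle_ensemble(
    "Which electrolyte disturbance most commonly causes torsades de pointes?",
    ["Hypokalemia", "Hypomagnesemia", "Hypernatremia", "Hypercalcemia"],
)
print(answer)
```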
DeepMind researchers unveiled “FunSearch,” which uses large language models to generate new mathematical and computer-science solutions. FunSearch pairs a pre-trained LLM that writes code-based solutions with an automated evaluator that verifies and scores them, refining the candidates iteratively. It has already yielded novel insights into key mathematical problems and shows potential across broad scientific applications, marking a transformative development in…
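The generate-evaluate-refine loop can be sketched in a few lines; `llm_propose` here is a random stand-in for a real LLM call, and the toy evaluator just scores a short Python function, so this illustrates the control flow rather than any of the problems DeepMind actually tackled.

```python
# Sketch of a FunSearch-style loop: a proposer suggests candidate programs, an
# automated evaluator scores them, and the best candidates seed the next round.
import random

def llm_propose(best_programs: list[str]) -> str:
    """Stand-in for an LLM that mutates/extends the best programs so far; here it
    just emits a random small variation so the loop runs end to end."""
    return f"def candidate(x):\n    return x * {random.randint(1, 10)} + {random.randint(0, 5)}"

def evaluate(program: str) -> float:
    """Automated evaluator: run the candidate in a scratch namespace and score it.
    A real system would sandbox execution and check correctness on the target problem."""
    scope: dict = {}
    try:
        exec(program, scope)
        return float(scope["candidate"](3))        # toy score: bigger output is better
    except Exception:
        return float("-inf")                       # broken programs are discarded

pool: list[tuple[float, str]] = []
for _ in range(50):                                # iterative refinement loop
    seeds = [p for _, p in sorted(pool, reverse=True)[:3]]
    candidate = llm_propose(seeds)
    pool.append((evaluate(candidate), candidate))

best_score, best_program = max(pool)
print(best_score)
print(best_program)
```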
AI-generated disinformation is threatening the upcoming Bangladesh national elections. Pro-government groups are using AI tools to create fake news clips and deepfake videos to sway public opinion and discredit the opposition. The lack of robust AI detection tools for non-English content exacerbates the problem, highlighting the need for effective regulatory measures.
This week in AI news:
– Oxford University permits AI use in Economics and Management courses, sparking debate.
– Google’s deceptive Gemini marketing video raises questions about authenticity.
– LimeWire returns with an AI-generated music platform, and Meta AI’s image generator makes an impact.
– ChatGPT and other AI technologies face performance and ethical challenges.…