This week’s AI news features the following highlights:
1. Taylor Swift’s battle against explicit AI deep fake images and the concerning ease of generating such content using AI.
2. The rise of political deep fakes showcasing AI’s capabilities in replicating voices with convincing realism and the challenges of detecting these fakes.
3. OpenAI’s evolving transparency issues and ambitions in AI chip production.
4. GPT-4’s diverse capabilities, including generating creative resume entries and aligning with expert doctors’ treatment recommendations.
5. Neuralink’s successful first human brain implant and Elon Musk’s pursuit of funding for his xAI project.
6. Potential risks of AI agents without supervision and the efforts to enhance their safety.
7. The upcoming EU AI Act Summit 2024 and the regulatory implications for OpenAI and Microsoft.
Additionally, other noteworthy AI developments include the launch of the National Artificial Intelligence Research Resource pilot program and debates surrounding AI companies reporting safety tests to the US government.
For more details, visit DailyAI.
Welcome to this week’s roundup of artisanal handcrafted AI news.
Swift injustice
Taylor Swift became the target of explicit AI deep fake images, sparking outrage. InstantID lets AI image generators reproduce a person’s likeness from a single photo of their face, making it easier than ever for anyone to create this kind of content.
Unbelievable
AI voice deep fakes have improved dramatically, making them harder to detect. OpenAI’s operations have become more opaque, diverging from its founding principles, and the company now aims to produce its own AI chips.
GPT-4 gets brainy
GPT-4 has been found to agree with expert doctors on recommended treatments for stroke victims. Additionally, Neuralink completed its first brain implant in a human subject, potentially allowing for direct communication between the brain and devices.
Safety first
AI agents let loose on the internet without supervision pose real dangers. Researchers have proposed three measures that could increase visibility into AI agents and make them safer.
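As a purely illustrative sketch (not the researchers’ actual proposal), the Python below shows what one visibility measure might look like in practice: a wrapper that tags an agent with a unique identifier and logs each action it takes in real time. The `VisibleAgent` class, its method names, and the log format are hypothetical.

```python
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical illustration of "visibility" measures for AI agents:
# a unique agent identifier plus a real-time activity log.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("agent_activity")


class VisibleAgent:
    """Wraps an agent callable so every action is tagged and logged."""

    def __init__(self, act_fn, operator: str):
        self.act_fn = act_fn                  # the underlying agent policy or model call
        self.operator = operator              # who deployed the agent
        self.agent_id = str(uuid.uuid4())     # unique agent identifier

    def act(self, task: str) -> str:
        result = self.act_fn(task)
        # Activity record: which agent, which operator, when, and what it did
        logger.info(
            "agent=%s operator=%s time=%s task=%r result=%r",
            self.agent_id,
            self.operator,
            datetime.now(timezone.utc).isoformat(),
            task,
            result,
        )
        return result


if __name__ == "__main__":
    # Stand-in for a real model call
    agent = VisibleAgent(act_fn=lambda task: f"completed: {task}", operator="example-org")
    agent.act("summarise this week's AI news")
```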
AI in EU
The upcoming EU AI Act Summit 2024 will be an ideal opportunity to discuss AI regulation proposals and get to grips with the EU AI Act and its global implications. Some civil rights groups are calling for the EU to probe OpenAI and Microsoft.
In other news…
The US National Science Foundation launched the National Artificial Intelligence Research Resource (NAIRR) pilot program. AI companies will need to start reporting their safety tests to the US government. A former board member is critical of the risks associated with OpenAI’s current board structure and the power it holds.
If you want to evolve your company with AI, stay competitive, and use it to your advantage, discover how AI can redefine the way you work. Connect with us at hello@itinai.com for advice on AI KPI management and continuous insights into leveraging AI. For AI solutions, visit itinai.com.