Italy’s data protection authority, the Garante, probes OpenAI’s ChatGPT over potential GDPR violations. Concerns relate to the mishandling of personal data, the lack of age verification, and the generation of inaccurate information about individuals. OpenAI asserts GDPR compliance and minimal inclusion of personal data. In the US, the FTC investigates AI startups’ ties to tech giants, prompting calls for antitrust inquiries. Regulatory…
InstantID is a zero-shot plugin that allows generative AI models to create consistent, personalized images from a single reference face image, without the need to fine-tune LoRAs. This brings clear benefits but also risks, including potential misuse to create offensive or culturally inappropriate images. The tool is expected to revolutionize AI-generated image production.…
AI voice cloning technology is causing concern as its use becomes more widespread and harder to detect. Recent events, such as a controversial audio recording of a high school principal, highlight the potential for reputational damage and the challenges in verifying the authenticity of such recordings. The technology’s advancement raises complex issues and poses a…
Microsoft’s deepening relationship with OpenAI has prompted scrutiny over competition within the AI sector. Civil society organizations, including Article 19, urge the EU and UK competition authorities to investigate the partnership’s potential anticompetitive impact. They emphasize the need for regulatory scrutiny to ensure fair competition and innovation in the AI domain.
A new pre-print study has shown GPT-4’s potential to aid in treating stroke patients. Analyzing data from 100 patients, the AI’s treatment recommendations closely aligned with those of expert neurologists and with real-world medical practice, as reflected in Area Under the Curve (AUC) scores of 0.85 and 0.80, respectively. GPT-4 also accurately predicted 90-day post-stroke mortality risk.
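The study’s headline numbers are ROC AUC scores, which capture how well the model’s recommendations rank cases the same way experts do. As a rough, hypothetical illustration (the variable names and data below are invented, not the study’s), this kind of agreement score can be computed like so:

```python
# Hypothetical illustration of the AUC metric reported in the study: scoring how well
# model-assigned treatment probabilities rank cases against expert yes/no decisions.
from sklearn.metrics import roc_auc_score

# Invented example data: 1 = expert recommended the treatment, 0 = did not.
expert_decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
# Invented probabilities derived from the model's recommendations for the same cases.
model_scores = [0.92, 0.20, 0.75, 0.66, 0.35, 0.88, 0.15, 0.40, 0.81, 0.28]

auc = roc_auc_score(expert_decisions, model_scores)
print(f"Agreement with expert decisions (ROC AUC): {auc:.2f}")
```

An AUC of 1.0 would mean the model’s scores rank every treated case above every untreated one; 0.5 would mean ranking no better than chance, which is why values of 0.80–0.85 are read as strong alignment.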
OpenAI CEO Sam Altman visited South Korea to meet with top Samsung Electronics and SK Group executives as part of efforts to bring AI chip production in-house. With plans to raise funds for chip fabrication plants and secure High Bandwidth Memory from Korean companies, OpenAI aims to reduce dependence on NVIDIA and Taiwan Semiconductor Manufacturing…
Elon Musk announced the first successful human trial of Neuralink’s brain implant, “Telepathy,” allowing control of devices simply through thought. Targeting individuals with limited hand mobility, the implant aims to restore autonomy and unlock human potential. The fusion of AI and brain-machine interfaces could revolutionize communication speed and capability, paving the way for an inevitable…
Microsoft is poised for its best quarterly growth in nearly two years, with a projected 15.8% revenue rise. Its alliance with OpenAI has propelled it to a $3 trillion valuation, establishing dominance in AI. Analysts project strong growth for Azure due to increased demand for AI services, despite competition from AWS and Google Cloud.
The Biden administration is compelling cloud service providers to disclose foreign users developing AI technologies, particularly those in China. The measure aims to restrict access to essential data centers and servers and to curb what officials see as malicious cyber-enabled activities. US-China tensions over AI continue to escalate, with the US pursuing strategies to maintain its technological edge and protect national security.
The late comedian George Carlin’s estate is suing the creators of an AI-generated video impersonating Carlin, claiming copyright infringement and violation of Carlin’s right of publicity. The show was initially believed to have been created by an AI, but the creators have since stated that it was written by a human. The lawsuit raises questions…
Researchers from The Wharton School explored methods to enhance GPT-4’s creativity in idea generation. Experimenting with various prompting strategies, they found that longer prompts and Chain of Thought (CoT) instructions resulted in more diverse ideas. While GPT-4’s ideas were initially similar, strategic prompting improved diversity, making it a valuable tool in brainstorming.
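The paper’s exact prompts are not reproduced here, but a minimal sketch of a Chain-of-Thought style idea-generation prompt using the OpenAI Python client might look like the following; the prompt wording, model name, and temperature are illustrative assumptions rather than the study’s actual setup.

```python
# Minimal sketch of a Chain-of-Thought idea-generation prompt; the prompt text,
# model name, and temperature are illustrative assumptions, not the study's design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cot_prompt = (
    "Generate 10 ideas for a new product aimed at college students, priced under $50. "
    "For each idea, first reason step by step about an unmet need, then describe the "
    "product, then explain how it differs from every idea you have listed so far."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": cot_prompt}],
    temperature=1.0,
)
print(response.choices[0].message.content)
```

The longer, step-by-step instruction is the point: asking the model to reason about needs and to differentiate each idea from the previous ones is the kind of strategic prompting the researchers found pushes GPT-4 away from producing near-duplicate ideas.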
AI deep fakes blur the line between reality and fiction, making it increasingly difficult to distinguish authentic content from manipulated media. This has prompted concerns about their potential impact on democratic processes, as incidents involving political figures around the world continue to grow in frequency and severity.
A study by Canva and Sago shows that 45% of job seekers globally use AI to enhance their resumes. Surprisingly, 90% of hiring managers find this practice appropriate, with nearly half embracing AI’s use for interview content creation. It’s predicted that traditional text-only resumes may become obsolete in the near future. Additionally, research confirms that…
The AI-generated deep fake images of Taylor Swift sparked widespread criticism and concerns over misinformation. Microsoft CEO Satya Nadella expressed alarm and urged action to implement stricter regulations and collaborative efforts between law enforcement and tech platforms. The incident also prompted public outrage and a digital manhunt, demonstrating the far-reaching impact of deep fake crimes.
Researchers propose three measures to increase visibility into AI agents for safer functioning: agent identifiers, real-time monitoring, and activity logs. They identify potential risks, including malicious use, overreliance, delayed impacts, multi-agent risks, and sub-agents. The paper stresses the need for governance structures and improved visibility to manage and mitigate these risks.
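To make the proposal concrete, here is a minimal, hypothetical sketch (not from the paper) of two of the measures, an agent identifier attached to every action plus an activity log that a real-time monitor could consume:

```python
# Illustrative sketch of agent identifiers and activity logs; all names are hypothetical.
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-activity")

class MonitoredAgent:
    """Wraps an arbitrary agent callable with a persistent identifier and an activity log."""

    def __init__(self, agent_fn, operator: str):
        self.agent_id = str(uuid.uuid4())  # agent identifier, traceable to an operator
        self.operator = operator
        self.agent_fn = agent_fn

    def act(self, task: str):
        # Log entry emitted before the action runs, giving a real-time monitor a hook.
        log.info("%s agent=%s operator=%s task=%r",
                 datetime.now(timezone.utc).isoformat(),
                 self.agent_id, self.operator, task)
        result = self.agent_fn(task)
        # Log entry recording the outcome for the post-hoc activity log.
        log.info("%s agent=%s result=%r",
                 datetime.now(timezone.utc).isoformat(), self.agent_id, result)
        return result

# Hypothetical usage with a stand-in agent function.
agent = MonitoredAgent(lambda task: f"completed: {task}", operator="example-org")
agent.act("summarize quarterly report")
```

Tying every logged action to a stable identifier and an accountable operator is what lets the governance structures the paper calls for attribute behaviour to a specific agent after the fact.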
The EU AI Act Summit 2024, held in London on February 6, 2024, focuses on the groundbreaking EU AI Act and offers practical guidance for stakeholders. The Act introduces comprehensive AI regulation categorized by risk level, and the summit centers on compliance responsibilities and opportunities for industry. It features notable speakers, sessions, and registration discounts. Visit…
The recent RAND report concludes that current Large Language Models (LLMs) do not significantly increase the risk of a biological attack by non-state actors. Their research, conducted through a red-team exercise, found no substantial difference in the viability of plans generated with or without LLM assistance. However, the study emphasized the need for further research…
This week’s AI news highlights AI excelling in math tests and stirring debate about fake truths. Google unveiled its text-to-video model, while OpenAI ventured into education and faced criticism for data practices. Other developments include legal regulations for AI hiring and Samsung’s collaboration with Google in AI-rich mobile phones. Meanwhile, AI’s impact on healthcare and…
OpenAI, initially transparent, now withholds key documents and has adopted a for-profit model, raising concerns that it is departing from its promises of open collaboration and public research. Significant investment from Microsoft transformed OpenAI and triggered leadership controversies. The company’s transition and reduced transparency reflect a departure from its original ethos.
North Korea’s growing foray into AI and ML is highlighted in a report by Hyuk Kim from the James Martin Center for Nonproliferation Studies. The report covers the nation’s historical AI achievements, current developments, and the dual-use potential of AI in civilian and military applications, as well as the cybersecurity threats it poses.