AI News

  • Introducing three new NVIDIA GPU-based Amazon EC2 instances

    Amazon announces the expansion of its EC2 accelerated computing portfolio with three new instances powered by NVIDIA GPUs: P5e instances with H200 GPUs, G6 instances with L4 GPUs, and G6e instances with L40S GPUs. These instances provide powerful infrastructure for AI/ML, graphics, and HPC workloads, along with managed services like Amazon Bedrock, SageMaker, and Elastic…

    Read more →

  • New method uses crowdsourced feedback to help train robots

    Researchers from MIT, Harvard University, and the University of Washington have developed a technique that lets an AI agent learn and complete tasks through reinforcement learning using data crowdsourced from nonexpert human users. Despite the noisy, potentially inaccurate feedback, this approach trains the robot more efficiently and effectively than other methods.

    Read more →

  • AI-generated sexually explicit material is spreading in schools

    Children in the UK are using AI image generators to create indecent images of other children, according to the UK Safer Internet Centre (UKSIC). The charity has highlighted the need for immediate action to prevent the problem from spreading. The creation, possession, and distribution of such images is illegal in the UK, regardless of whether…

    Read more →

  • “Authentic” is the Merriam-Webster word of the year, but why?

    Merriam-Webster has chosen “authentic” as its Word of the Year for 2023 due to its increased relevance in the face of fake content and deep fakes. The word has multiple meanings, including being genuine and conforming to fact. This decision reflects the current crisis of authenticity in a world where trust is challenged by the…

    Read more →

  • Boost inference performance for LLMs with new Amazon SageMaker containers

    Amazon SageMaker has released a new version (0.25.0) of Large Model Inference (LMI) Deep Learning Containers (DLCs) with support for NVIDIA’s TensorRT-LLM Library. This upgrade provides improved performance and efficiency for large language models (LLMs) on SageMaker. The new LMI DLCs offer features such as continuous batching support, efficient inference collective operations, and quantization techniques.…

    Read more →

  • Unveiling the Frontiers of Scientific Discovery with GPT-4: A Comprehensive Evaluation Across Multiple Disciplines for Large Language Models

    Large language models like GPT-4 have gained popularity due to their remarkable capabilities across various fields, excelling at tasks such as coding, mathematics, law, and understanding human intentions. GPT-4 can process both text and images, and even displays characteristics associated with Artificial General Intelligence (AGI). Recent research has…

    Read more →

  • UK and US develop new global guidelines for AI security

    UK and US cyber security agencies have developed guidelines to enhance the security of AI systems. The guidelines focus on secure design, development, deployment, and operation, aiming to prevent cybercriminals from hijacking AI and accessing sensitive data. While the guidelines are non-binding, they have the endorsement of 16 countries. However, the prevalence of zero-day vulnerabilities…

    Read more →

  • Microsoft Releases Orca 2: Pioneering Advanced Reasoning in Smaller Language Models with Tailored Training Strategies

    Microsoft introduces Orca 2, a smaller language model built for advanced reasoning. Rather than relying on traditional imitation learning, Orca 2’s training teaches the model a range of reasoning techniques to improve its reasoning and comprehension skills. Orca 2 outperforms other models on various language tasks and achieves high accuracy. The departure from imitation learning showcases a new approach to unlocking the…

    Read more →

  • Courage to learn ML: Demystifying L1 & L2 Regularization (part 1)

    L1 and L2 regularization are techniques used in machine learning to prevent overfitting. Overfitting occurs when a model is too complex and learns from both the underlying patterns and the noise in the training data, resulting in poor performance on unseen data. L1 and L2 regularization add penalty terms to the model’s loss function, discouraging…

    Read more →
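The penalty terms described above can be sketched in a few lines of NumPy; the weights, data, and regularization strength below are illustrative, not taken from the article:

```python
import numpy as np

# Minimal sketch: L1 and L2 penalty terms added to a plain MSE loss.
w = np.array([0.5, -1.2, 3.0])        # model weights (illustrative)
X = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.5, -1.0]])      # two training examples
y = np.array([6.0, -4.0])             # targets

predictions = X @ w
mse = np.mean((predictions - y) ** 2)

lam = 0.1                              # regularization strength
l1_penalty = lam * np.sum(np.abs(w))   # L1: sum of absolute weights
l2_penalty = lam * np.sum(w ** 2)      # L2: sum of squared weights

loss_l1 = mse + l1_penalty   # discourages large weights; can zero some out
loss_l2 = mse + l2_penalty   # shrinks all weights smoothly toward zero
```

L2's squared penalty shrinks every weight a little, while L1's absolute-value penalty can drive some weights exactly to zero, which is why L1 is often associated with feature selection.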

  • AWS AI services enhanced with FM-powered capabilities

    AWS has announced updates to its AI services, including language support and summarization capabilities. Amazon Transcribe now supports over 100 languages, improving accuracy and adding features like automatic punctuation and speaker diarization. Amazon Transcribe Call Analytics offers generative AI-powered call summarization, saving time for agents and managers. Amazon Personalize introduces the Content Generator, allowing companies…

    Read more →

  • Unpacking the hype around OpenAI’s rumored new Q* model

    OpenAI’s recent CEO ousting has generated speculation about a supposed AI breakthrough: a rumored new model called Q* said to be capable of solving grade-school math problems. Experts note that while AI models struggle with math, solving such problems reliably would be a significant development. However, this does not signify the birth of superintelligence or pose an immediate threat.…

    Read more →

  • This AI Paper Introduces ‘Lightning Cat’: A Deep Learning Based Tool for Smart Contracts Vulnerabilities Detection 

    Researchers from Salus Security have introduced an AI solution called “Lightning Cat” that uses deep learning techniques to detect vulnerabilities in smart contracts. The solution utilizes optimized deep learning models, including CodeBERT, LSTM, and CNN, to accurately identify vulnerabilities and improve semantic analysis. Experimental results show that the Optimized-CodeBERT model achieves impressive performance in vulnerability…

    Read more →

  • Global Collaboration for Secure AI: U.S., U.K., and 18 Countries Unveil New Guidelines

    The United States, the United Kingdom, and 16 other partners have released comprehensive guidelines for developing secure artificial intelligence systems. Led by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC), the guidelines prioritize a ‘secure by design’ approach, covering all stages of AI system development. The guidelines also…

    Read more →

  • Google Bard Can Now Summarize YouTube Videos For You

    Google’s chatbot Bard has introduced a “YouTube Extension” that allows users to extract specific details from YouTube videos by asking questions. This advancement showcases Bard’s ability to comprehend visual media, improving user engagement. Bard was found to be accurate and swift in summarizing video content, setting it apart from other chatbots in the AI…

    Read more →

  • ByteDance Introduces PixelDance: A Novel Video Generation Approach based on Diffusion Models that Incorporates Image Instructions with Text Instructions

    Researchers from ByteDance have introduced PixelDance, a video generation approach that combines text and image instructions to create complex and diverse videos. The system excels in synthesizing videos with intricate settings and actions, surpassing existing models. It integrates diffusion models and Variational Autoencoders and outperforms previous models in terms of video quality. While the model…

    Read more →

  • Researchers use synthetic data to train AI image classifier

    MIT researchers have developed a method called StableRep to address the scarcity of training data for AI image classifiers. They generate synthetic images from text prompts with a text-to-image model, then train on them using a strategy called “multi-positive contrastive learning,” which treats images generated from the same prompt as positives of one another. The resulting image classifier, StableRep+, outperformed models trained on real images. While there are challenges such as computation…

    Read more →
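For readers curious how the “multi-positive contrastive learning” mentioned above might look in code, here is a sketch of the general idea (images generated from the same text prompt are treated as positives of one another); this is an illustration under that assumption, not the paper’s implementation:

```python
import numpy as np

def multi_positive_contrastive_loss(z, prompt_ids, tau=0.1):
    """Sketch of a multi-positive contrastive loss.

    z: (n, d) array of image embeddings.
    prompt_ids: (n,) array; embeddings sharing a prompt id are positives.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / tau                               # scaled cosine similarities
    np.fill_diagonal(sim, -1e9)                       # exclude self-pairs from softmax
    # Target: uniform distribution over the other images from the same prompt.
    pos = (prompt_ids[:, None] == prompt_ids[None, :]).astype(float)
    np.fill_diagonal(pos, 0.0)
    target = pos / pos.sum(axis=1, keepdims=True)
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Cross-entropy between the target distribution and the softmax over pairs.
    return -(target * log_softmax).sum(axis=1).mean()

# Toy usage: four embeddings, two prompts, two images per prompt.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
loss = multi_positive_contrastive_loss(z, np.array([0, 0, 1, 1]))
```

Compared with standard contrastive learning, where each anchor has exactly one positive, the target here spreads probability mass over every image that came from the same prompt.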

  • Researchers from China Introduce Video-LLaVA: A Simple but Powerful Large Visual-Language Baseline Model

    Researchers from Peking University, Peng Cheng Laboratory, Peking University Shenzhen Graduate School, and Sun Yat-sen University have introduced Video-LLaVA, a Large Vision-Language Model (LVLM) approach that unifies visual representation into the language feature space. Video-LLaVA surpasses benchmarks in image question-answering and video understanding, outperforming existing models and showcasing improved multi-modal interaction learning. The model aligns…

    Read more →


  • Elevate your self-service assistants with new generative AI features in Amazon Lex

    Generative AI is revolutionizing the conversational AI industry by enabling more natural and intelligent interactions. Amazon Lex has introduced new features that take advantage of these advances, such as conversational FAQs, descriptive bot building, assisted slot resolution, and training utterance generation. These features make it easier for developers to build chatbots that provide personalized customer…

    Read more →

  • Researchers from UCL and Google DeepMind Reveal the Fleeting Dynamics of In-Context Learning (ICL) in Transformer Neural Networks

    In-context learning (ICL) is the capacity of a model to modify its behavior at inference time without updating its weights, allowing it to tackle new problems. Neural network architectures, such as transformers, have demonstrated this capability. However, recent research has found that ICL in transformers is influenced by certain linguistic data characteristics. Training transformers without…

    Read more →