Artificial Intelligence
Researchers from the Shanghai AI Lab and MIT have presented the Hierarchically Gated Recurrent Neural Network (HGRN) for efficient sequence modeling. The HGRN integrates forget gates to better handle long-term dependencies in tasks like language modeling and image classification. It surpasses traditional RNNs and Transformers by balancing training efficiency and sequence complexity, with promising results…
Researchers from The Hong Kong University of Science and Technology and Sun Yat-sen University have developed Photo-SLAM, an innovative framework for real-time localization and photorealistic mapping with RGB-D, stereo, and monocular cameras. Photo-SLAM addresses scalability and operational limitations of existing methods and achieves high-fidelity scene rendering at up to 1000 fps. It utilizes Gaussian Pyramid…
The study addresses local private mean estimation of high-dimensional vectors, noting sub-optimal error or high complexity in existing solutions. A new framework, ProjUnit, is proposed, which offers computationally efficient algorithms with low communication complexity and near-optimal error by projecting inputs to a random low-dimensional subspace before normalization.
A study found that observing soft robots assisting with tasks alleviated viewers’ safety worries and job security fears, suggesting a psychological edge over traditional hard-material robots.
A University of Geneva study, led by Alexandre Pouget, demonstrated a machine-learning algorithm can identify Bordeaux red wines’ chateaux of origin by their chemical profiles with 100% accuracy. The algorithm also recognized vintage years with 50% accuracy and confirmed the chemical foundation of terroir.
The text discusses integrating Amazon Comprehend and Amazon Kendra to enrich enterprise search capabilities. Structured and unstructured data are rapidly growing, and using custom metadata helps categorize information. Amazon Comprehend can identify document types and entities, which Amazon Kendra then uses to filter search results, including facets for better searching. The solution is particularly applied…
Stability AI’s SDXL Turbo utilizes Adversarial Diffusion Distillation (ADD) for rapid, high-fidelity text-to-image synthesis, outperforming multi-step models with a single-step process, detailed in their research paper. It’s demonstrated in real-time on Clipdrop and hailed for its exceptional image quality and speed on computational platforms.
Research from various institutions proposes the GREAT PLEA ethical framework for generative AI in healthcare, mirroring military ethics, to ensure transparency, fairness, and empathy in AI deployment, and calls for user education on AI systems to improve trust and patient care.
Small Language Models (SLMs) are emerging as an efficient, adaptable, and secure alternative to Large Language Models, offering benefits in training cost, deployment, transparency, and accuracy for resource-constrained applications. SLMs like DistilBERT, Orca 2, and versions of BERT are increasingly applied in customer service, product development, email automation, and personalized marketing.
Protopia AI and AWS have partnered to provide a tool called Stained Glass Transform (SGT), enabling businesses to deploy large language models (LLMs) securely without compromising data privacy. SGT protects sensitive information in prompts and fine-tuning data by converting them into randomized re-representations, preserving usability and accuracy. This facilitates responsible AI implementation and competitive advantage…
Lasso Security discovered 1,681 exposed API tokens with varying access levels in code on HuggingFace and GitHub, posing significant security risks. Tokens could potentially allow unauthorized modifications to popular AI models, with consequences if misused. The issue was addressed by revoking the compromised tokens.
AstraZeneca invests $247 million in Absci to develop an AI-generated antibody for unspecified cancer treatment. Absci’s AI platform aims to accelerate discovery by simulating protein interactions and validation in wet-labs, potentially revolutionizing oncology drug development with a promised rapid advancement cycle.
This paper, accepted for NeurIPS 2023’s Diffusion Models workshop, discusses the challenges in adapting score-based generative models to various data domains and proposes a solution using a functional view of data for a unified representation and reformulated score function.
A study reveals that artificial intelligence systems, used in areas like self-driving cars and medical imaging, are more susceptible to deliberate attacks that can trigger incorrect decisions than previously understood.
The study presented at NeurIPS 2023’s Generative AI and Biology workshop focuses on converting 2D molecular structures into 3D conformations using a novel, scalable diffusion model on Riemannian Manifolds, achieving competitive results without assuming molecule structure.
Retraining customer churn prediction models is vital but challenging, especially when distinguishing the effects of interventions on customer behavior. Control groups, feedback surveys, and uplift modeling can address these biases, enabling more accurate predictions and focused retention strategies. Continual refinement and adaptation are key to future success.
A new integer-to-string conversion algorithm, called “LR printer,” outperforms the optimized standard algorithm by 25-38% for 32-bit and 40-58% for 64-bit integers. It’s beneficial for applications that generate large text files with numerous integers, affecting performance notably in data-heavy fields like Data Science and Machine Learning. The C++ implementation is available on GitHub.
The paper, presented at the NeurIPS 2023 ICBINB workshop, examines the use of pre-trained language models in text-to-image auto-regressive generation, finding them of limited utility and providing a twofold analysis related to cross-modality tokens.
Google researchers identified a method to retrieve parts of OpenAI’s ChatGPT training data by prompting repeated words, revealing sensitive information. Investing $200, they extracted over 10,000 examples. The findings raise security and privacy concerns amidst lawsuits accusing OpenAI of misusing private data for ChatGPT training.
Yann LeCun, Meta AI’s chief and deep learning pioneer, has expressed skepticism about the near-term development of artificial general intelligence (AGI) and quantum computing’s role in AI. He contrasts industry leaders by downplaying imminent AGI breakthroughs and doubts AI will match human intelligence soon. He also emphasizes the need for multimodal AI systems and democratizing…