This Week’s Major AI developments in a nutshell (December Week 3, 2023)

  1. Researchers from Switzerland’s ETH Zurich unveiled CyberRunner, an AI robot that can play the popular labyrinth marble game, which requires physical skill. It beat the fastest recorded time by a skilled human player by over 6%. During the learning process, CyberRunner found ways to ‘cheat’ by skipping certain parts of the maze. [Details].
  2. Google Research introduced VideoPoet, a large language model (LLM) that is capable of a wide variety of video generation tasks, including text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio (can output audio to match an input video without using any text as guidance) [Details | Demos].
  3. NVIDIA Research presented Align Your Gaussians (AYG), a text-to-4D method that combines text-to-video, text-guided 3D-aware multiview, and regular text-to-image diffusion models to generate high-quality dynamic 4D assets [Details].
  4. MIT and Harvard researchers used AI to screen millions of chemical compounds to find a class of antibiotics capable of killing two different types of drug-resistant bacteria [Details].
  5. Microsoft Copilot, Microsoft’s AI-powered chatbot, can now compose songs via an integration with GenAI music app Suno [Details].
  6. Stable Video Diffusion, the foundation model from Stability AI for generative video, is now available on Stability AI Developer Platform API [Details].
  7. Hugging Face added MLX models to the Hub for running models directly on Macs: Phi 2, Llama-based models (CodeLlama, TinyLlama, Llama 2), Mistral-based models (Mistral, Zephyr), and Mixtral included [Link].
  8. Apple published a research paper, ‘LLM in a flash: Efficient Large Language Model Inference with Limited Memory’, that tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory and loading them into DRAM on demand [Link].
  9. Upstage released SOLAR-10.7B, a 10.7 billion parameter model built on the Llama 2 architecture, with Mistral 7B weights integrated into the upscaled layers [Details].
  10. Mixtral-8x7B shows strong performance against GPT-3.5-Turbo on LMSYS’s Chatbot Arena leaderboard. Chatbot Arena is a crowdsourced, randomized battle platform that uses user votes to compute Elo ratings [Leaderboard].
  11. Sarvam AI and AI4Bharat released OpenHathi-7B-Hi-v0.1-Base, a 7B parameter model based on Llama2, trained on Hindi, English, and Hinglish [Details].
  12. Alibaba research presented FontDiffuser, a diffusion-based image-to-image one-shot font generation method that excels on complex characters and large style variations [Details].
  13. OpenAI introduced its Preparedness Framework, a living document describing OpenAI’s approach to safely developing and deploying their frontier models [Details].
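The core idea in item 8 — keep the weights on flash and page only the needed chunks into DRAM — can be sketched with memory-mapped files. This is an illustrative toy in Python's stdlib (the file stands in for flash; `write_weights` and `load_chunk` are hypothetical names), not Apple's actual implementation:

```python
import mmap
import os
import struct
import tempfile

def write_weights(path, values):
    """Store float32 'weights' on disk (standing in for flash)."""
    with open(path, "wb") as f:
        for v in values:
            f.write(struct.pack("<f", v))

def load_chunk(path, start, count):
    """Read `count` float32 weights starting at index `start` via mmap,
    so regions we never touch are not copied into DRAM."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            raw = mm[start * 4:(start + count) * 4]
            return [v for (v,) in struct.iter_unpack("<f", raw)]

path = os.path.join(tempfile.gettempdir(), "demo_weights.bin")
write_weights(path, [0.0, 1.0, 2.0, 3.0, 4.0])
chunk = load_chunk(path, 2, 2)  # only these two values are read into memory
print(chunk)  # [2.0, 3.0]
```

The paper's actual contribution involves much more (sparsity-aware loading, row-column bundling), but the mmap pattern captures the on-demand aspect.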
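For item 10, a minimal sketch of how pairwise user votes turn into Elo ratings. This is the generic Elo update, not LMSYS's exact computation; the model names and starting ratings are placeholders:

```python
def expected(r_a, r_b):
    """Elo-model probability that A beats B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, a, b, winner, k=32.0):
    """Apply one crowd vote: winner is 'a', 'b', or 'tie'."""
    e_a = expected(ratings[a], ratings[b])
    score_a = 1.0 if winner == "a" else 0.0 if winner == "b" else 0.5
    ratings[a] += k * (score_a - e_a)
    ratings[b] += k * ((1.0 - score_a) - (1.0 - e_a))

ratings = {"mixtral-8x7b": 1000.0, "gpt-3.5-turbo": 1000.0}
update(ratings, "mixtral-8x7b", "gpt-3.5-turbo", winner="a")
print(round(ratings["mixtral-8x7b"], 1))  # 1016.0
```

With equal starting ratings the expected score is 0.5, so a single win moves the winner up by k/2 points and the loser down by the same amount; over thousands of votes the ratings converge toward each model's true win rate.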

Source: AI Brews - you can subscribe here. It's free to join, sent only once a week with bite-sized news, learning resources and selected tools. Thank you!

submitted by /u/wyem