Tech Beats-3

machine-learning, mlops, newsletter, news, research
Generated by Stable Diffusion"

👉 Subscribe to my Substack to get the latest news and articles.

I admit, it has been a rather lazy week for me, and I haven’t had the chance to consume as much content as I would have preferred. Nonetheless, I did manage to work on a delightful project called “ME_AI,” which is essentially a basic “markdown editor with AI” capabilities. In fact, I even used this very editor to write this newsletter! It is just a single HTML file. Add a few more frills and there you have your new startup :).

Now, setting aside any self-promotion, I’d like to highlight a few noteworthy updates. While this update will be brief, let’s dive right in…

Papers #

The Stable Signature: Rooting Watermarks in Latent Diffusion Models #

👉 Paper

The rapid advancement of generative image modeling has opened up a world of possibilities, from creative design to deepfake detection. However, the ethical implications surrounding their use cannot be ignored. In an effort to address these concerns, this paper introduces an innovative approach that combines image watermarking with Latent Diffusion Models.

The primary goal of this method is to ensure that all generated images contain an invisible watermark, enabling future detection and identification. The process involves fine-tuning the latent decoder of the image generator, conditioned on a binary signature. A specially trained watermark extractor can then recover this hidden signature from any generated image. A subsequent statistical test is conducted to determine whether the image originates from the generative model.
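
To make the detection step concrete, here is a minimal sketch, assuming a k-bit signature and an extractor that has already recovered the bits (the function names and the significance threshold are my own placeholders, not the paper’s): under the null hypothesis that the image does not come from the watermarked model, each recovered bit matches the key only by chance, so the match count follows a Binomial(k, 0.5) distribution.

```python
# Minimal sketch of the statistical test described above (not the paper's code).
import numpy as np
from scipy.stats import binomtest

def detect_signature(recovered_bits: np.ndarray, key_bits: np.ndarray, alpha: float = 1e-6) -> bool:
    """Decide whether an image likely came from the watermarked generator."""
    k = len(key_bits)
    matches = int((recovered_bits == key_bits).sum())
    # One-sided test: is the match count significantly above the chance level k/2?
    p_value = binomtest(matches, n=k, p=0.5, alternative="greater").pvalue
    return p_value < alpha

# Toy usage with a hypothetical 48-bit signature
rng = np.random.default_rng(0)
key = rng.integers(0, 2, size=48)
noisy_copy = key.copy()
noisy_copy[:5] ^= 1                     # a few bit flips from image degradation
print(detect_signature(noisy_copy, key))  # True: 43/48 matches is far above chance
```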

Instruction Tuning for Large Language Models: A Survey #

👉 Paper

Instruction tuning (IT) is a crucial technique that boosts the capabilities and controllability of large language models (LLMs). By training LLMs on a dataset of (instruction, output) pairs, IT bridges the gap between the next-word prediction objective of LLMs and the users’ desire for LLMs to adhere to human instructions.
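
As a rough illustration (the prompt template and the masking convention here are common practice, not something prescribed by the survey), this is how a single (instruction, output) pair is typically turned into a supervised fine-tuning example, with the loss computed only on the response:

```python
# Sketch of formatting one (instruction, output) pair for supervised fine-tuning.
PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n"

def build_example(instruction: str, output: str):
    """Return (tokens, labels); prompt tokens are masked out of the loss."""
    prompt = PROMPT_TEMPLATE.format(instruction=instruction)
    # A whitespace split stands in for a real subword tokenizer.
    prompt_tokens, output_tokens = prompt.split(), output.split()
    tokens = prompt_tokens + output_tokens
    # In a real setup both lists hold token ids; -100 marks positions the
    # cross-entropy loss ignores, so only the response is supervised.
    labels = [-100] * len(prompt_tokens) + output_tokens
    return tokens, labels

tokens, labels = build_example(
    "Summarize the paper in one sentence.",
    "It surveys instruction tuning methods, datasets, and applications.",
)
print(labels)  # -100 over the prompt, then the response tokens
```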

This work provides an in-depth review of IT literature, covering the general methodology, dataset construction, model training, and applications in various domains and modalities.

Additionally, it includes an analysis of influential factors such as instruction output generation and dataset size. The paper also explores potential pitfalls, criticism, and efforts to address existing limitations.

Bookmarks #

📌 Alternative model architectures to Transformer (Reddit ML Thread)

📌 Being human

📌 FraudGPT

🎬 Podcast with Mustafa Suleyman (DeepMind’s co-founder)

🎬 Andrew Ng’s Talk “Opportunities in AI”

📌 Can LLMs learn from a single example?

Open Source #

Token Monster #


👉 Code

“TokenMonster is an ungreedy subword tokenizer and vocabulary generator, enabling language models to run faster, cheaper, smarter and generate longer streams of text.”
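
To show what “ungreedy” buys you, here is a toy comparison of my own (not TokenMonster’s actual algorithm or vocabulary): greedy longest-match commits to the longest prefix at each step, while a lookahead/optimal split over the same vocabulary can use fewer tokens.

```python
# Toy illustration of greedy vs. ungreedy (optimal) subword tokenization.
from functools import lru_cache

VOCAB = {"ab", "abc", "cde", "a", "b", "c", "d", "e"}
MAX_LEN = max(len(t) for t in VOCAB)

def greedy_tokenize(text: str) -> list[str]:
    """Always take the longest vocabulary match at the current position."""
    out, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + MAX_LEN), i, -1):
            if text[i:j] in VOCAB:
                out.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"untokenizable character {text[i]!r}")
    return out

def optimal_tokenize(text: str) -> list[str]:
    """Minimize the number of tokens via dynamic programming."""
    @lru_cache(maxsize=None)
    def best(i: int) -> list[str]:
        if i == len(text):
            return []
        candidates = [[text[i:j]] + best(j)
                      for j in range(i + 1, min(len(text), i + MAX_LEN) + 1)
                      if text[i:j] in VOCAB]
        if not candidates:
            raise ValueError(f"untokenizable character {text[i]!r}")
        return min(candidates, key=len)
    return best(0)

print(greedy_tokenize("abcde"))   # ['abc', 'd', 'e']  -> 3 tokens
print(optimal_tokenize("abcde"))  # ['ab', 'cde']      -> 2 tokens
```

Fewer tokens per text means shorter sequences for the same content, which is where the “faster, cheaper, longer streams” claim comes from.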

Hydra-MoE #


👉 Code

An attempt to approach GPT-4-level performance by applying a Mixture of Experts over many LoRA fine-tuned models, each specialized in a different skill.
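
Here is a minimal sketch of the general idea as I read it (not Hydra-MoE’s actual code; layer sizes, rank, and the gating scheme are all placeholders): one frozen base linear layer, several skill-specific LoRA adapters, and a router that mixes their low-rank updates per input.

```python
# Sketch of Mixture-of-Experts routing over LoRA adapters (PyTorch).
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)   # down-projection
        self.B = nn.Linear(rank, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.B.weight)                # start as a no-op update

    def forward(self, x):
        return self.B(self.A(x))

class MoELoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # base weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        self.experts = nn.ModuleList(LoRAAdapter(d_in, d_out, rank) for _ in range(num_experts))
        self.gate = nn.Linear(d_in, num_experts)     # router over skills

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)                    # (..., E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)   # (..., d_out, E)
        mixed = (expert_out * weights.unsqueeze(-2)).sum(dim=-1)
        return self.base(x) + mixed

layer = MoELoRALinear(nn.Linear(512, 512), num_experts=4, rank=8)
print(layer(torch.randn(2, 512)).shape)  # torch.Size([2, 512])
```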

I’m particularly excited about this project since I’ve been pondering a similar concept with a slight twist: applying Git-like collaboration to LLMs through LoRA-based fine-tuning. Here is the outline of the algorithm I have in mind:

While the outline rests on several assumptions, I find it intriguing, as it suggests a potentially innovative approach to decentralized LLM training. It has been occupying my thoughts lately.

Unloop #


👉 Code

Creating loops and tunes with generative AI. It uses a model called VampNet and has the best audio fidelity I’ve seen in open-source audio generation.