July 16th, 2024

The Engineer's Guide to Deep Learning: Understanding the Transformer Model

Hironobu Suzuki's guide explores the Transformer model, a key advance in AI since its introduction in 2017. It offers concise explanations and Python code examples, and emphasizes the model's significance for engineering and for future innovations.


In the third golden age of AI, the Transformer model has emerged as a significant breakthrough since its introduction in 2017, impacting various fields beyond machine translation. Hironobu Suzuki's guide aims to help engineers understand the Transformer efficiently, offering concise explanations and practical Python code examples for hands-on learning. The document covers essential topics such as neural networks, recurrent neural networks (RNNs), natural language processing (NLP), attention mechanisms, and the Transformer model itself. Additionally, it includes a section on basic Python and mathematics knowledge necessary for grasping the Transformer concepts. Suzuki, a software programmer and author, emphasizes the importance of the Transformer in modern engineering and hints at future breakthroughs in Transformer-based technologies. The guide is available for educational purposes with specific guidelines for commercial use, reflecting Suzuki's dedication to sharing knowledge while protecting his intellectual property rights.
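
As a taste of what the guide builds toward, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer. The NumPy implementation and names below are illustrative, not taken from Suzuki's code.

    # Minimal scaled dot-product attention in NumPy (illustrative sketch,
    # not from Suzuki's guide).
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q, K: (seq_len, d_k); V: (seq_len, d_v)."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                # pairwise similarities
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)             # row-wise softmax
        return w @ V                                   # weighted sum of values

    # Toy self-attention over 4 tokens of dimension 8
    x = np.random.default_rng(0).normal(size=(4, 8))
    print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)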

12 comments
By @MAXPOOL - 9 months
There are many other resources that are better:

1/ The Annotated Transformer: Attention is All You Need http://nlp.seas.harvard.edu/annotated-transformer/

2/ Transformers from Scratch https://e2eml.school/transformers.html

3/ Andrej Karpathy has a really good series of intros: https://karpathy.ai/zero-to-hero.html
   Let's build GPT: from scratch, in code, spelled out. https://www.youtube.com/watch?v=kCc8FmEb1nY
   GPT with Andrej Karpathy: Part 1 https://medium.com/@kdwa2404/gpt-with-andrej-karpathy-part-1...

4/ 3Blue1Brown:
   But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning https://www.youtube.com/watch?v=wjZofJX0v4M
   Attention in transformers, visually explained | Chapter 6, Deep Learning https://www.youtube.com/watch?v=eMlx5fFNoYc
   Full 3Blue1Brown Neural Networks playlist https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_6700...

By @revskill - 9 months
Transformer tutorial is like the new "Monad tutorial".

By @yobbo - 9 months
This is a very compressed walk-through from perceptron to transformer.

When he works through the gradients of an LSTM, for example, it is to aid understanding, not to help you implement it in your favourite framework.

When he shows solutions in various frameworks, the purpose is to help create connections between what the math looks like and what the code can look like.
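
For instance, the standard LSTM cell equations map onto code quite directly; the NumPy sketch below is one illustration of that math-to-code connection, with names of my own choosing, not taken from the guide.

    # One LSTM cell step in NumPy, mapping the standard equations onto
    # code (illustrative; names are not from the guide):
    #   f = sigmoid(W_f [h; x] + b_f)    forget gate
    #   i = sigmoid(W_i [h; x] + b_i)    input gate
    #   g = tanh(W_g [h; x] + b_g)       candidate cell state
    #   o = sigmoid(W_o [h; x] + b_o)    output gate
    #   c' = f * c + i * g;  h' = o * tanh(c')
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, b):
        """x: (n_in,); h, c: (n_hid,); W: (4*n_hid, n_in+n_hid); b: (4*n_hid,)."""
        z = W @ np.concatenate([h, x]) + b   # all four gates in one matmul
        f, i, g, o = np.split(z, 4)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h_new = sigmoid(o) * np.tanh(c_new)
        return h_new, c_new

    # Toy usage: input dim 3, hidden dim 2
    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(8, 5)), np.zeros(8)
    h, c = lstm_step(rng.normal(size=3), np.zeros(2), np.zeros(2), W, b)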

By @uoaei - 9 months
One of the most frustrating things about all the documentation on Transformers is the sole emphasis on NLP.

In particular, one of the most interesting parts of the Transformer architecture to me is the attention mechanism, which is permutation equivariant (if not for the positional embeddings people use to counteract this inherent quality of attention layers). Also, the ability to arbitrarily mask this or that node in the graph -- or even individual edges -- gives the whole thing so much flexibility for encoding domain knowledge into your architecture.
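
A quick numerical check of that property (an illustrative NumPy sketch, not from the guide): without positional embeddings, permuting the input tokens simply permutes self-attention's outputs the same way.

    # Permutation equivariance of self-attention (illustrative sketch).
    import numpy as np

    def attention(X):
        scores = X @ X.T / np.sqrt(X.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ X

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))      # 5 tokens, dimension 8
    perm = rng.permutation(5)
    assert np.allclose(attention(X)[perm], attention(X[perm]))

    # Masking an edge (query j attending to key k) amounts to setting
    # scores[j, k] = -inf before the softmax.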

Positional embeddings may still be required in many cases, but you can be clever about them beyond the overly restrictive perspective of treating attention-layer inputs purely as one-dimensional sequences.

By @tuyguntn - 9 months
A question to the ML/AI experts of HN: could you please share the beginner resources you think would be worthwhile for a person who wants to switch domains from CRUD/backend APIs to ML/AI? There seem to be many branches in this domain, and I'm not sure where to start.

Is my understanding correct?

    * ML engineer -> engineer who builds ML models with PyTorch (or similar frameworks)
    * AI engineer -> engineer who builds applications on top of AI solutions (prompt engineering, OpenAI/Claude APIs, ...)
    * MLOps -> people who help with deploying and serving models

By @gregw2 - 9 months
No content besides a few paragraphs of intro; the actual content returns 404 Not Found errors.

By @alister - 9 months
> When you send me an email, please provide at least two SNS [social networking service] addresses (e.g. LinkedIn, Twitter) for verification purposes. ... I no longer accept contact from anonymous individuals.

It's pretty sad to see that social networking is being adopted as an identification and trust mechanism even by technical people. It was bad enough when some governments began demanding social networking usernames for visa/immigrant screening, but we can't even send an email without social proof to other technical people now?

By @shubham13596 - 9 months
A very good resource for building up from the basics in a concise manner.

By @_giorgio_ - 9 months
It uses Keras, which is obsolete. Nobody uses that thing anymore.

Stay away from this.

By @benterix - 9 months
> In contrast, the AI technology of the current golden age, which began in the mid-2010s, has consistently exceeded our expectations.

Well, until recently, that is. It looks like we have hit a wall in what LLMs can do - some might call it a plateau of productivity. Namely, as far as coding is concerned, LLMs can successfully create chunks of code of limited length and tiny programs; they can also review small pieces of code and suggest improvements that do not depend on the context of the whole program (unless it fits in the context window). In spite of the huge effort put into creating systems where LLM agents work together to write software, such as AutoGPT, no non-trivial program has been created this way so far.