diff --git a/_posts/2024-10-30-Understanding-Transformers-Part-2-:-Building-Blocks.md b/_posts/2024-10-30-Understanding-Transformers-Part-2-:-Building-Blocks.md
index bcadb269bedec..ff02076cfc3e3 100644
--- a/_posts/2024-10-30-Understanding-Transformers-Part-2-:-Building-Blocks.md
+++ b/_posts/2024-10-30-Understanding-Transformers-Part-2-:-Building-Blocks.md
@@ -221,7 +221,7 @@ embeddings = embedding_layer(training_data_inputs)
 
 Now, since our progression for understanding and implementing the Transformers is based on the chronology in which the elements appear in the model itself, we will revisit the Transformer Architecture diagram once more to understand the next part we need to understand.
 
-![Transformer]({{site.baseurl}}/images/transformer_architecture.jpg)
+![Transformer]({{site.baseurl}}/images/transformer_architecture_cropped.jpg)
 
 ## Positional Encoding