Research - An Introduction to Text Analysis with Transformers and LLMs in Python
Transformers and Large Language Models (LLMs) are widely used in text analysis applications such as text classification, text generation, summarisation, translation, and chatbots. In this course, you will learn how Transformers and LLMs work and how to apply them in practice, in Python, to tasks such as text classification, text generation, and effective chatbot usage.
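As a purely illustrative sketch of the kind of task covered, the snippet below classifies short texts with a pretrained Transformer using the Hugging Face `transformers` library; the library choice and default model are assumptions made here, not a statement of the exact tools used in the course.

```python
# Illustrative sketch only: text classification with a pretrained Transformer
# via the Hugging Face `transformers` library (the course may use other tools).
from transformers import pipeline

# Load a default pretrained sentiment-analysis model (downloaded on first use).
classifier = pipeline("sentiment-analysis")

texts = [
    "The experimental results exceeded our expectations.",
    "The survey response rate was disappointingly low.",
]

# Each prediction is a dict with a label and a confidence score.
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```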
Objectives
Acquire the key competencies needed to use LLMs for text classification, text generation, and effective chatbot usage, including prompt engineering to enhance researchers' productivity
Target audience
Any PhD student, post-doc, or researcher at UNIL who would like to use LLMs in their research
Content
By the end of the course, participants are expected to:
- Understand how Transformers and LLMs work
- Be able to use LLMs for text classification, text generation and effective chatbot usage in Python
- Master prompt engineering techniques tailored to researchers' needs (see the illustrative sketch below)
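To give a concrete, hedged idea of what prompt engineering for a research workflow can look like, the sketch below builds a structured prompt (explicit role, task, and output format) and sends it to a chat model through the `openai` Python client. The client, model name, and abstract-screening task are illustrative assumptions, not the tools or examples used in the course.

```python
# Hypothetical sketch of prompt engineering for a research task; the OpenAI
# client and model name are assumptions, not part of the course material.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A structured prompt: explicit role, task, constraints and output format.
system_prompt = (
    "You are an assistant helping a researcher screen abstracts. "
    "Classify the abstract as RELEVANT or NOT_RELEVANT to reinforcement "
    "learning, then give a one-sentence justification."
)
abstract = "We study policy-gradient methods for robotic control under noisy rewards."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": abstract},
    ],
)
print(response.choices[0].message.content)
```

The design point: an explicit role, a precise task definition, and a constrained output format generally yield more consistent answers than a bare question.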
Length
1 day
Organization
Once per year
Location
In-person
Practicals
Prerequisites
- Basic knowledge of deep learning: we assume you know how simple feedforward neural networks work and how to interpret accuracy and loss curves (covered, for example, in the course "A Gentle Introduction to Deep Learning with Python and R")
- Comfort with Python programming
IMPORTANT: To do the practicals
- On UNIL JupyterLab: You need to be able to access the eduroam Wi-Fi with your UNIL account or connect via the UNIL VPN
- On your laptop: No account is required
- On Curnagl: Please register using your UNIL email address
- Note that in all cases you need to bring your own laptop