As a DevOps engineer, do you rely on ChatGPT but face restrictions at work due to company policy or sensitive data? Are you curious about hosting your own Large Language Model (LLM)?
In the one-day workshop Generative AI for DevOps Engineers at AT Computing, you will learn how to set up and run your own local version of ChatGPT. Everything is built on open-source technology, without sending any data to the cloud.
By attending this course, you will directly address the EU AI Act's requirements on AI literacy: risks, privacy, data sharing, model bias, and hallucinations are all thoroughly covered. This ensures you not only gain hands-on skills, but also stay on top of current and upcoming regulations.
The day begins with an introduction to Generative AI, exploring available LLMs and how to use them effectively. You will then dive into the hardware side, including GPU acceleration on virtual machines and in containers.
Through hands-on exercises, you will set up your own LLM server and even create a custom model. You will also learn how to connect a web-based client to your LLM, effectively building your own ChatGPT clone.
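To give a flavour of what "creating a custom model" can look like, here is a minimal sketch assuming Ollama is used as the local LLM server (the workshop materials may use a different open-source tool; the model name and system prompt below are purely illustrative):

```
# Minimal Ollama Modelfile (illustrative example)
FROM llama3                  # base model to customise
PARAMETER temperature 0.2    # lower temperature for more deterministic answers
SYSTEM "You are a DevOps assistant. Answer concisely and prefer shell examples."
```

With Ollama, such a file would be turned into a named model with `ollama create devops-assistant -f Modelfile` and used interactively with `ollama run devops-assistant`.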
On top of that, you will explore how to integrate the LLM API with Python, apply Retrieval Augmented Generation (RAG) to work with your own documents, analyze images, and even perform log analysis using your LLM.
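As a taste of the Python integration topic, the sketch below sends a prompt to a locally hosted LLM over HTTP. It assumes an Ollama-style server on its default local port; the endpoint, model name, and example prompt are assumptions for illustration, not the workshop's exact setup:

```python
import json
import urllib.request

# Assumed: a local Ollama server on its default port (illustrative only;
# the workshop may use a different server or endpoint).
LLM_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming completion request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local LLM server and return the response text."""
    req = urllib.request.Request(
        LLM_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Example: the log-analysis use case mentioned above.
    print(ask("llama3", "Explain this log line: 'disk /dev/sda1 is 97% full'"))
```

Because no data leaves your machine, the same pattern works for sensitive logs and internal documents.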