On-device Continual Learning of LLMs
This thesis investigates the continual learning of state-of-the-art, open-source large language model (LLM) architectures such as Llama, T5, and Mamba on low-power embedded platforms, including ARM CPUs, ARM embedded GPUs, and NVIDIA embedded GPUs. The focus is on evaluating the latency, power consumption, and thermal behavior of various continual learning techniques, with an emphasis on overcoming the challenges posed by the limited compute and memory of these devices.
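To make the notion of an on-device continual learning step concrete, the following minimal sketch performs a single parameter-efficient adaptation step with LoRA adapters via Hugging Face `peft`. The checkpoint name, adapter rank, target modules, and learning rate are illustrative assumptions, not the thesis setup; any small Llama-style model would serve the same purpose.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumption: a small Llama-style checkpoint that fits embedded memory budgets.
model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Train only low-rank adapters; the frozen base model keeps the trainable
# parameter count (and the optimizer state) small, which matters on-device.
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def update_step(text: str) -> float:
    """One adaptation step on a single newly observed sample; returns the loss."""
    batch = tokenizer(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])  # causal-LM loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```

Adapter-based updates are one natural candidate here because they bound the backward pass and optimizer memory to a small fraction of the full model, but the thesis would compare them against other continual learning techniques.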
Through benchmarking continual learning methods on these platforms, this research identifies system-level bottlenecks and optimization opportunities. The goal is to enable sustainable and efficient on-device continual learning, which is essential for real-time adaptation in AI-driven edge applications such as IoT, smart devices, and autonomous systems.
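The sketch below shows the flavor of system-level measurement such benchmarking involves: wall-clock latency around an update step plus SoC temperature from the standard Linux sysfs thermal interface. Power-rail readout is deliberately omitted because it is board-specific (e.g., the INA3221 sensors on NVIDIA Jetson boards, also exposed through the tegrastats tool); the thermal-zone path is an assumption that holds on most ARM Linux boards.

```python
import time
from pathlib import Path

# Standard Linux sysfs thermal interface; values are millidegrees Celsius.
# Which thermal_zone maps to the CPU/GPU cluster varies per board.
THERMAL_ZONE = Path("/sys/class/thermal/thermal_zone0/temp")

def read_temp_c() -> float:
    return int(THERMAL_ZONE.read_text()) / 1000.0

def timed(fn, *args):
    """Wall-clock latency of one call. On CUDA devices, call
    torch.cuda.synchronize() before reading the clock so that queued
    kernels are included in the measurement."""
    t0 = time.perf_counter()
    result = fn(*args)
    latency_s = time.perf_counter() - t0
    return result, latency_s

# Usage, wrapping the update step from the previous sketch:
# loss, step_s = timed(update_step, "a newly observed on-device sample")
# print(f"step: {step_s * 1000:.1f} ms, SoC temp: {read_temp_c():.1f} °C")
```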
By addressing key constraints in hardware and learning algorithms, this thesis contributes to the broader field of embedded AI, proposing practical solutions for deploying advanced LLMs in environments where both power and computational resources are scarce.
Requirements
Hands-on experience with DNN training and deployment.
Basic knowledge of Python and C++.
Thesis Type
Semesterarbeit (semester thesis)
Contact
Building 5501, Room 2.102a
+49 (89) 289 - 55183