Senior GenAI Developer
The Position
We are seeking an experienced AI/LLM Engineer to join our Data-Driven Customer Insights team and lead the development of innovative Large Language Model (LLM) and AI-driven capabilities. In this role, you will contribute to the OneCustomer product, an AI-driven initiative at Boehringer Ingelheim aimed at transforming how we engage with healthcare professionals and internal stakeholders. The DevOps team builds and operates scalable digital solutions that enable real-time sharing of customer insights, leveraging data and cutting-edge technologies to support strategic decision-making across Commercial, Medical, and Analytics functions.
As an AI/LLM Engineer, you will play a pivotal role in designing, configuring, developing, and operating the AI components that power OneCustomer. You will lead LLM and machine learning topics, ranging from prompt engineering to observability and compliance, and help shape how AI augments customer understanding across the organization. You will collaborate closely with Product Owners, Engineers, and external partners to bring high-impact, production-ready AI functionality to life.
Tasks and Responsibilities
- Lead all LLM-related activities, including prompt engineering, compliance-rule prompts, user-feedback integration, parameter tuning, data-structure understanding, and model/API version upgrades.
- Collaborate with cross-functional teams to understand the requirements and objectives for each product release, and align AI capabilities with business goals.
- Partner with Commercial, Medical, and Analytics stakeholders to translate business knowledge into effective system prompts and behaviors, applying few-shot learning, updating domain ontologies, and continuously refining models and prompts based on stakeholder feedback. Manage the vector databases and knowledge graphs that support AI and RAG workflows.
- Implement and operate observability systems, covering monitoring, logging, performance tracking, error detection, and dashboards for real‑time traceability and business/compliance visibility of AI components.
- Manage LLM configurations and deployments, overseeing endpoint setup, version control, integration consistency across environments, and infrastructure optimization for scale, load balancing, and high availability.
- Collaborate with MLOps, QA, Data Science, and Product teams to ensure proper configuration, robust testing, and continuous improvement of LLM-based features. Engineer compliance-aligned prompts and system rules to reduce false positives and ensure market safety. Integrate user feedback loops to continuously refine prompts, models, personas, and ontologies.
Requirements
Required
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, Engineering, or a related field with a strong emphasis on AI and machine learning technologies.
- Proven experience building AI/ML solutions in DevOps environments, including work with LLMs, RAG pipelines, or knowledge graphs.
- Demonstrable experience in AI application development using no-code/low-code platforms or configuring off-the-shelf solutions together with external partners.
- Deep understanding of LLM architecture (transformers, embeddings, RAG, multimodal models) and experience fine‑tuning LLMs (LoRA, QLoRA, custom datasets).
- Proven knowledge of ontology management, schema design, and prompt debugging/validation.
- Hands-on experience with MLOps deployment, model optimization techniques (quantization, compression, inference acceleration), and CI/CD.
- Ability to follow and apply cutting-edge research in LLMs and model safety.
- Experience with observability systems, API operations, and scalable cloud architectures.
- Ability to work collaboratively in cross-functional product teams and to communicate technical topics clearly to non‑technical stakeholders.
- Strong analytical, problem‑solving, and cross‑functional collaboration skills.
- Advanced English proficiency (spoken and written).