CONFERENCE PROGRAM

Please note:
This page lists the English-language sessions of TDWI München 2024. The full conference program, including the German-language sessions, can be found here.

  • Tuesday, 11.06.
  • Wednesday, 12.06.
  • Thursday, 13.06.
08:30 - 09:30
Break
Breakfast & Registration

10:40 - 11:25
Di 1.1
Augmented Analytics – the path to becoming insight-driven

Embarking on an analytics transformation reveals a pivotal challenge: not everyone can be data-driven. We therefore shift to Augmented Analytics, seamlessly integrating insights into workflows and bypassing the need for universal data literacy. This approach elevates decision-making quality, operational efficiency, and adoption, marking the journey towards a culture of insight-driven innovation. We'll explore Augmented Analytics as the cornerstone of the next analytics maturity level, highlighting its cultural and technical enablers.

Target Audience: Business Leaders/Professionals, Change/Transformation Managers, Analytics Professionals
Prerequisites: Basic knowledge: analytics transformation, genAI, data-driven culture, roles in transformation
Level: Advanced

Extended Abstract:
In our journey to transform analytics, we have come to a critical realization: Not everyone can and will be data-driven, because the reality and impact of data literacy is limited. Our current strategies need to evolve. Enter augmented analytics, a promising approach that will redefine the way we deal with insights. By integrating analytics directly into organizational workflows, we not only bypass the need for universal data literacy, but also significantly improve insight-driven decision making. This talk will explain how one commercial insurer is using this innovative strategy to improve decision quality, streamline operations, and foster a culture equipped to handle analytical challenges. We will explore the technical and methodological foundations and transformative enablers (the necessary roles, data liberalization, transformational mindset, and the importance of generative AI) that make augmented analytics not just a tool, but a catalyst for becoming truly insight-driven.

Willi Weber, with a background in business informatics, transitioned from software development in HDI Global's natural catastrophe department to pioneering work in analytics. He led the development of probabilistic models and pricing software, became head of data analytics, and architected HDI Global's transformation into an insight-driven company. Currently, he spearheads projects augmenting business processes through analytics and generative AI and is co-author of the book Augmented Analytics.

Willi Weber

11:35 - 12:20
Di 2.2
Transforming minds and culture during a tech transformation

Our unit has the opportunity to move its platforms and data to the AWS Cloud. The main challenges we faced (and still face) are:

  • Getting buy-in from all colleagues
  • Dealing with complexity
  • Re-skilling
  • Giving people a sense of belonging and psychological safety in a complex, hybrid and changing environment.

Find out how we dealt with these challenges using lots of practical hacks and the newest findings from neuroscience. You might be surprised by which measures worked best, and by how we achieved a huge positive impact on an average budget.

Target Audience: All colleagues working in Data Analytics and AI; the presentation is especially interesting for leadership roles such as Tech Leads, Scrum Masters, Product Owners, Product Managers, Tribe Chiefs, etc.
Prerequisites: Active listening skills :) Otherwise none.
Level: Basic

Extended Abstract:
The session will give deeper insights into how we deal with:

  • Hiring in a market of skill shortage
  • Improving retention/reducing fluctuation
  • Making people feel that they belong
  • Helping people develop/giving them attractive perspectives
  • Moving from a good to a great place to work (speaking in a metaphor, our vision is to become like the Maldives, so that everyone wants to go there, misses the place when they leave, and wants to come back)
  • Leadership/Agile Working
  • If time allows, insights into concrete measures on mental health and psychological safety.

For a first impression of our people and culture, please watch this video, which we produced ourselves: youtu.be/FMA4dk7elD0

Additionally, here you can see use cases with data provided by Swisscom: www.swisscom.ch/de/business/enterprise/angebot/platforms-applications/data-driven-business/mobility-insights-data.html

Marianne Temerowski is the People Lead of Data Services at Data Analytics & AI at Swisscom and leads 114 Engineers, Scrum Masters and Product Owners (internals and externals).
She has

  • worked for about 15 years in the information and communications technology industry for companies such as SAP, Deutsche Telekom and Swisscom.
  • successfully implemented transformations of organizations, technologies, and products on several occasions.
  • many years' experience in leading people and over EUR 60 million in profit and loss responsibility.
  • studied Psychology and combined it with economics and law. She likes to apply the newest findings from Neuroscience at work.
  • lived in 7 countries and is half Chilean, half German. LinkedIn: www.linkedin.com/in/marianne-temerowski/
Marianne Temerowski

11:35 - 12:20
Di 5.2
Evolution of Modern Data Architecture: A Practical Journey

Thomas Mager shares real-world experience navigating a modern data architecture landscape. He will reflect on the initial motivations that sparked this journey, the structure of his contemporary data architecture, the value he could generate, and the obstacles he faced along the way. Additionally, he will offer valuable insights into his current and future endeavors, including leveraging SaaS, advancing AI initiatives, and rapidly developing new regulatory reports, all facilitated by the robust framework of a modern data architecture based on data virtualization.

Target Audience: Data Architect, Data Engineer, Project Leader, Decision Makers,...
Prerequisites: Basic knowledge
Level: Advanced

Extended Abstract:
In this presentation, Thomas Mager will share his real-world experience navigating a modern data architecture landscape over the past five years. He will reflect on the initial motivations that sparked this journey, the structure of his contemporary data architecture, the value he could generate, and the obstacles he faced along the way. Additionally, Thomas will offer valuable insights into his current and future endeavors, including leveraging SaaS, advancing AI initiatives, and rapidly developing new regulatory reports, all facilitated by the robust framework of modern data architecture with data virtualization.

The main focus areas of this presentation will be:

  • Integrating diverse data management techniques, such as data virtualization and ELT, into a unified platform.
  • Developing a core business logic layer tailored for data-heavy, IT-centric applications.
  • Empowering and skilling 'Data Citizens' to effectively utilize this data architecture.
  • Facilitating both current and prospective use cases through this architecture.

Thomas Mager is Head of Data and Analytics Platforms at Partner Reinsurance, a global multi-line reinsurance company. He joined PartnerRe in 2008 after having worked in data management functions at Credit Suisse and UBS. With his team, he builds the worldwide data platform supporting all key business areas. Building an agile truly cloud-native environment is a key driver for him and his team.

Thomas Mager

12:20 - 13:50
Break
Lunch & Exhibition

14:30 - 15:30
Di 5.3
Data Architecture Evolution and the Impact on Analytics

This session looks at how adoption of open table formats by data warehouse database management vendors and advances in SQL are making it possible to merge siloed analytical systems into a new federated data architecture supporting multiple analytical workloads.

Target Audience: Data architect, enterprise architect, CDO, data engineer
Prerequisites: Basic understanding of data architecture & databases
Level: Advanced

Extended Abstract:
In the last 12-18 months we have seen many different architectures emerge from many different vendors claiming to offer 'the modern data architecture solution' for the data-driven enterprise. These range from streaming data platforms to data lakes, to cloud data warehouses supporting structured, semi-structured and unstructured data, cloud data warehouses supporting external tables and federated query processing, lakehouses, data fabric, and federated query platforms offering virtual views of data and virtual data products on data in data lakes and lakehouses. In addition, all of these vendor architectures claim to support the building of data products in a data mesh. It is not surprising, therefore, that customers are confused about which option to choose.

However, in 2023, key changes emerged, including much broader support for open table formats such as Apache Iceberg, Apache Hudi and Delta Lake in many other vendors' data platforms. In addition, we have seen significant new milestones in extending the ISO SQL Standard to support new kinds of analytics in general-purpose SQL. AI has also advanced to work across any type of data.

The key question is what does this all mean for data management? What is the impact of this on analytical data platforms and what does it mean for customers? What opportunities does this evolution open up for tools vendors whose data foundation is reliant on other vendor database management systems and data platforms? This session looks at this evolution and helps vendors realise the potential of what's now possible and how they can exploit it for competitive advantage.

  • The demand for data and AI
  • The need for a data foundation to underpin data and AI initiatives
  • The emergence of data mesh and data products
  • The challenge of a distributed data estate
  • Data fabric and how it can help build data products
  • Data architecture options for building data products
  • The impact of open table formats and query language extensions on architecture modernisation
  • Is the convergence of analytical workloads possible?

Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical levels. He teaches, consults and presents around the globe.

Mike Ferguson

15:30 - 16:00
Break
Coffee & Exhibition

16:00 - 16:45
Di 2.4
Don't focus on the AI hype – focus on value with ChatGPT

With more than 1,000 users on its cloud data platform, DKV Mobility must take the next evolutionary step towards becoming a data-driven company.

DKV Mobility uses Snowflake's Frosty to generate business value for non-developers instead of following the AI hype.

The use case explains DKV Mobility's Why/How/What approach on the way to implementing business models that incorporate generative AI. The session closes with the do's & don'ts.

Target Audience: Decision makers, anyone interested in generative AI
Prerequisites: None
Level: Basic

Extended Abstract:
With a fully mature cloud data platform used by more than 1,000 users, DKV Mobility must take the next evolutionary step towards becoming a data-driven company.

While other companies focus on the ChatGPT hype, DKV Mobility uses Snowflake's Frosty to generate real business value for non-developers across the company.

With the right combination of cloud data platform and data governance concept, Snowflake opens up an entirely new field for using well-defined business data to generate business value. The use case explains DKV Mobility's Why/How/What approach on the way to implementing business models that incorporate generative AI. The session closes with the do's & don'ts.

Dr. Sönke Iwersen has led data & analytics organizations in various industries for more than 15 years (including Telefónica, Handelsblatt, XING, Fitness First, HRS). His focus is on developing digitalization and data strategies and operationalizing them with cloud-based analytical platforms and ML/AI solutions. He regularly presents his innovative results at national and international conferences.

More content from this speaker? Have a look at sigs.de: https://www.sigs.de/autor/soenke.iwersen

Sönke Iwersen

16:00 - 16:45
Di 5.4
Consumer-Driven Contract Testing for Data Products

Data Mesh is a decentralized approach to enterprise data management. A Data Mesh consists of Data Products, which can be composed to form higher-order Data Products. In order for a Data Mesh to scale, this composition needs to be safe and efficient, which calls for automated testing. In the Microservices architecture, scalably testing the interaction between services is sometimes achieved by an approach called Consumer-Driven Contract Testing. This session explores how this approach can be applied to the automated testing of Data Products.
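The composition idea in this abstract can be illustrated with a small sketch (entirely hypothetical, not taken from the talk): a consuming data product registers the schema 'contract' it depends on, and the producing data product validates each new version against it before publishing — the data-side analogue of consumer-driven contract testing for microservices. All names and the validation logic below are invented for illustration.

```python
# Hypothetical sketch of consumer-driven contract testing for a data product.
# A consumer publishes the schema it relies on; the producer validates every
# new dataset version against all registered consumer contracts before release.

# A consumer contract: the columns (and types) this consumer depends on.
orders_dashboard_contract = {
    "order_id": int,
    "order_total": float,
    "country": str,
}

def validate_contract(rows, contract):
    """Return a list of contract violations found in `rows`."""
    violations = []
    for i, row in enumerate(rows):
        for column, expected_type in contract.items():
            if column not in row:
                violations.append(f"row {i}: missing column '{column}'")
            elif not isinstance(row[column], expected_type):
                violations.append(
                    f"row {i}: column '{column}' is "
                    f"{type(row[column]).__name__}, expected {expected_type.__name__}"
                )
    return violations

# Producer-side check, run before publishing a new data product version.
sample = [
    {"order_id": 1, "order_total": 19.99, "country": "DE", "channel": "web"},
    {"order_id": 2, "order_total": "n/a", "country": "AT", "channel": "app"},
]
print(validate_contract(sample, orders_dashboard_contract))
```

In a real Data Mesh setup, such checks would typically run in the producer's CI/CD pipeline against every registered consumer contract, so that breaking schema changes are caught before a downstream data product consumes them.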

Target Audience: Data Engineers, Data Scientists
Prerequisites: Basic knowledge of key concepts of Data Mesh, such as Data Products, as well as basic knowledge of Microservices architectural practices
Level: Advanced

Arif Wider is a professor of software engineering at HTW Berlin and a fellow technology consultant with Thoughtworks Germany, where he served as Head of Data & AI before moving back to academia. As a vital part of his research, teaching, and consulting, he is passionate about distilling and distributing great ideas and concepts that emerge in the software engineering community. He is a frequent conference speaker, book author, and industry expert on topics around Data Mesh.

Arif Wider

16:45 - 17:15
Break
Coffee & Exhibition

17:15 - 18:00
Di 2.5
Building an AI Center of Excellence at Austrian Power Grid

In this presentation, Pascal will provide a comprehensive overview of the foundational strategies deployed in establishing the AI Center of Excellence at APG. He will give practical insights into the many challenges and the strategic solutions implemented to ensure the Center's success. Further, the session will delve into the intricate interdependencies among AI, analytics, coding, and data excellence - critical components that surface during the development of robust AI and data capabilities.

Target Audience: Strategists, project/program managers, corporate development, research and innovation, CIO/CEO/CTOs, lead data scientists, managers
Prerequisites: Basic knowledge of AI, analytics and data management is a plus, but not necessary
Level: Basic

Pascal Plank is the Head of the AI Center of Excellence at Austrian Power Grid, where he leads the Data Driven Utility Program. He drives company-wide data and AI projects to shape the digital grid of the future. Before APG, Pascal developed key AI strategies and revamped digital innovation management at ÖBB. In his spare time, he teaches machine learning at the University of Applied Sciences Technikum Vienna, imparting practical insights to aspiring data science students.

Pascal Plank

19:15 - 22:00
Welcome
Data and Drinks

Enjoy cool drinks and small snacks while you make valuable contacts and look back on the first day of the conference!

08:00 - 09:00
Break
Breakfast & Registration

09:00 - 10:00
Mi 1.1
Creating Data Transparency with the Support of LLMs and AI

Data Transparency is a basic need of every data worker. It is crucial to find the right data in a limited amount of time to keep the time-to-market of data and analytical products short. However, documenting and classifying data manually can be cumbersome due to the vast amount of data. In this session, we present the approach MediaMarktSaturn has taken to use LLMs and other AI models in combination with a data catalog to establish a high level of data transparency in a semi-automated way.

Target Audience: Chief Data Officers, Data Governance Managers, Data Strategists, Data Engineers, Data Scientists, Subject Matter Experts
Prerequisites: Basic knowledge of Data Catalogs and LLMs
Level: Basic

Extended Abstract:
Data Transparency is a basic need of every data worker. It is crucial to find the right data in a limited amount of time to keep the time-to-market of data and analytical products short. But with the ever-growing amount of data, it gets more and more difficult to keep up a suitable level of data transparency. In this session, we present the approach MediaMarktSaturn has taken to use LLMs and Open Source Data Catalogs to establish a high level of data transparency in a semi-automated way. With this approach, it is possible to keep the manual work at a suitable level and manage data transparency in an efficient way. Moreover, we will outline how this approach is integrated into the overall Data Governance and Data Architecture.

Dieter Berwald is Competency Lead and Product Owner at MediaMarktSaturn Technology, where he focuses on advanced analytics and data catalog applications. He currently leads projects aimed at entity recognition in large cloud data lakes, managing data access, and leveraging metadata for enhanced data utilization.

Engin Özsoy is Senior Technical Program Manager at MediaMarktSaturn Technology.

Dr. Christian Fürber is founder and managing director of Information Quality Institute GmbH (iqinstitute.de), a consultancy specializing in data excellence and data management solutions. Before founding IQI in 2012, he held various data management positions in the German Armed Forces, where he designed and implemented one of the first data governance initiatives in Europe. Christian and his team have successfully delivered numerous data projects and strategies for well-known companies across industries, helping them derive substantial value from their data. Besides his work at IQI, Christian is an author and conference speaker and organizes the TDWI topic group 'Data Strategy & Data Governance'.

Dieter Berwald, Engin Özsoy, Christian Fürber

09:00 - 12:10
Mi 3.1
Data Architecture to Support Analytics & Data Science

Supporting analytics and data science in an enterprise involves more than installing open source software or using cloud services. Too often the focus is on technology when it should be on data. The goal is to build multi-purpose infrastructure that can support both past uses and new requirements. This session discusses architecture principles, design assumptions, and the data architecture and data governance needed to build good infrastructure.

Target Audience: BI and analytics leaders and managers; data architects; architects, designers, and implementers; anyone with data management responsibilities who is challenged by recent changes in the data landscape
Prerequisites: A basic understanding of how data is used in organizations, applications and platforms in the data ecosystem, data management concepts
Level: Advanced

Extended Abstract:
The focus in our market has been on acquiring technology, and that ignores the more important part: the landscape within which this technology exists and the data architecture that lies at its core. If one expects longevity from a platform, then it should be a designed rather than an accidental architecture.

Architecture is more than just software. It starts with uses, and includes the data, technology, methods of building and maintaining, governance, and organization of people. What are design principles that lead to good design and data architecture? What assumptions limit older approaches? How can one modernize an existing data environment? How will this affect data management? This session will help you answer these questions.

Outline - Topics covered:    

  • A brief history of data infrastructure and past design assumptions
  • Categories of data and data use in organizations
  • Differences between BI, analysis, and data science workloads
  • Data architecture
  • Data management and governance processes
  • Tradeoffs and antipatterns

Mark Madsen works on the use of data and analytics for decision-making and organizational systems. He has worked for the past 25 years in the field of analytics and decision support, starting with AI at the University of Pittsburgh and robotics at Carnegie Mellon University.

Mark Madsen

09:00 - 10:00
Mi 4.1
Rocking the World of Airline Operations using Data, AI & OR

Learn how Lufthansa Group is leveraging AI in the sky - and how you could apply the same concepts to your business. Lufthansa Group and Google Cloud developed a cloud native application that pulls data from operational systems to the cloud for analysis by artificial intelligence and operations research for optimizing flight operations. It balances crew availability, passenger demand, aircraft maintenance status etc. Recommendations are given to Operations Control for final checks and implementation.

Target Audience: People with interest in how to apply data driven optimization of their businesses. Data Engineers, Project Leads, and Decision Makers
Prerequisites: Interest in optimizing operational processes using data. A basic understanding of managing data coming from a diverse set of operational applications
Level: Advanced

Extended Abstract:
Imagine you are an airline OpsController responsible for running the operations of an airline, and you just learned that an aircraft that was supposed to leave for Frankfurt in 30 minutes has to stay on the ground longer due to a minor repair. What do you do now? Passengers have booked connecting flights from Frankfurt; if they miss them, hotel accommodations might become necessary. Wait for the repair and possibly delay the connecting flights as well? Would such a delay be okay for the crews, or will some exceed their permitted working hours? Is a replacement aircraft available? If so, is there a crew for it? And what about...

Such a small issue can trigger a cascade of things to consider, and its complexity can quickly exceed human capabilities. There are more possible chess games than protons in the universe, and airline operations is an even harder problem. Highly experienced people can resolve situations to a certain extent based on their wealth of experience. But wouldn't it be better if optimal decisions across multiple business processes (which can still affect punctuality and passengers) were proposed to the person in the OpsControl Center with the support of an AI?

Lufthansa Group has addressed this problem and, together with Google Cloud, created a data platform for its operational data from core business processes. This data foundation is used as input for AI models to calculate alternative future scenarios from the airline's perspective. These are calculated using optimizers that Google contributes to the project. At runtime, in addition to the operational data, a set of airline-specific business rules and a cost function, both of which are the airline's responsibility, are given to the so-called 'Solver'.

When a request is made, several scenarios are calculated in parallel. They differ in their respective configurations; their results are summarized, and the best scenarios are displayed to the person in OpsControl as proposals for evaluation via a specially created user interface. The person in OpsControl can then transfer the necessary changes back to the operational systems for implementation in the physical world. It is very important to note that, today and for the foreseeable future, a human being has the final say regarding the use of AI-generated proposals.
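As a rough illustration of the scenario evaluation described above — an invented toy, not Lufthansa's or Google's actual solver — each candidate scenario can be checked against hard business rules and ranked by an airline-defined cost function; all names, rules, and weights here are hypothetical.

```python
# Toy sketch: score alternative recovery scenarios against airline-defined
# business rules and a cost function, then propose the best to OpsControl.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    delay_minutes: int          # total propagated delay
    missed_connections: int     # passengers losing their connection
    crew_duty_ok: bool          # hard business rule: legal duty-time limits

def cost(s: Scenario) -> float:
    """Airline-specific cost function; the weights are illustrative only."""
    return 2.0 * s.delay_minutes + 150.0 * s.missed_connections

def propose(scenarios, top_n=2):
    """Discard scenarios violating hard rules, rank the rest by cost."""
    feasible = [s for s in scenarios if s.crew_duty_ok]
    return sorted(feasible, key=cost)[:top_n]

candidates = [
    Scenario("wait for repair", delay_minutes=45, missed_connections=12, crew_duty_ok=True),
    Scenario("swap aircraft", delay_minutes=20, missed_connections=3, crew_duty_ok=True),
    Scenario("delay connections too", delay_minutes=60, missed_connections=0, crew_duty_ok=False),
]

for s in propose(candidates):
    print(f"{s.name}: cost {cost(s):.0f}")
```

The real system evaluates such configurations in parallel with operations-research optimizers; this sketch only shows the shape of the rule-filtering and cost-ranking step.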

The application presented here optimizes across different business units simultaneously and avoids the common occurrence of local optimizations that contradict each other and may have to be resolved manually. The solution is in use at Swiss Intl. Airlines, a Lufthansa Group airline. After 18 months, the use of this technology resulted in CO2 emission savings of 7,400 tons per year through reduced fuel consumption, which is equivalent to approximately 18 Boeing 777 flights between Zurich and New York City. These savings will increase significantly when the application is rolled out to the other airlines in the Lufthansa Group.

In this presentation, Christian Most and Andreas Ribbrock will explain the use case. They will also show how the Lufthansa Group uses the cloud as a real-time data integration platform to prepare data from operational systems, each of which does exactly what it is supposed to do but is not designed to process data from other systems. They will also describe the challenges posed by this approach, as well as the opportunities offered by 'serverless' technologies, since the project team does not have to worry about infrastructure.

Christian Most is a technology leader at Deutsche Lufthansa, focusing on innovation and the application of cutting-edge technologies in the airline industry. With a deep understanding of cloud computing, artificial intelligence, and data analytics, he drives Lufthansa's digital transformation initiatives.

Christian Most, Andreas Ribbrock

10:00 - 10:30
Break
Coffee & Exhibition

10:30 - 11:15
Mi 8.2
Align with Willibald towards an Ensemble Logical Datamodel

The ELM approach is a new way to capture the business requirements for your data warehouse and data architecture, and a perfect fit for modeling your Data Vault.

This presentation will mimic the 6 steps needed to guide Samen- und Pflanzenhandel Willibald from their business case towards an Ensemble Logical Model – the ELM approach. Each of the 6 steps can be seen as a small part of the combined journey of business and IT as described in the ELM approach. When the Willibald case became available, we applied our ELM approach and, apart from mimicking workshops, used all our templates as a guide towards a Data Vault model based on the Willibald business rather than on the available source systems.

The ELM approach consists of a series of 2-4 workshops with the business people, using templates to discover the Core Business Concepts – that is, what is important for the organization – and their relationships. All 6 steps also communicate and document what is important for the organization from a non-technical perspective and help in understanding how the Ensemble Logical Model fits the implementation of the Data Vault.

Have you ever asked yourself: How can I get business users actively involved? How do I gather requirements about which data should be integrated? And how should it be done from a business perspective?

Well, you will find answers here.

Remco Broekmans is the vice president of international programs for Genesee Academy. He works in Business Intelligence and Enterprise Data Warehousing as a trainer, facilitator, advisor and speaker with a focus on modeling and architecture including Ensemble and Data Vault modeling. He works internationally and is based in the Netherlands.
Specialties: Information Management and Modeling, Ensemble Modeling, Data Vault Modeling, Agile Data Warehousing, Education and Business Development

Remco Broekmans

11:25 - 12:10
Mi 6.3
Governance for Low-Code & Self-Service-Platforms @ Müller

The successful establishment of low-code and self-service platforms in enterprises is not guaranteed. Seamless collaboration among departments, management, IT, and security is crucial. The Müller Service Group has effectively tackled this challenge by implementing a governance framework. In this session, we will share best practices and insights from our journey towards successful governance, using the Microsoft Power Platform at Müller as a case study of a low-code platform.

Target Audience: Decision Makers, project leaders, IT-managers, IT-architects, data analysts, Chief Data Officers (CDOs), Chief Information Officers (CIOs), responsible individuals in Data & Analytics
Prerequisites: Basic Knowledge
Level: Basic

Extended Abstract:
In this session, we will first introduce the foundational framework crucial for the successful operation of a low-code platform, particularly within the context of the Microsoft Power Platform. Subsequently, we will delve into a detailed examination of opportunities and risks associated with operating such a platform, emphasizing the strategic significance of governance.

An additional emphasis of the presentation is the central role of IT in operating low-code and self-service platforms. Managers and domain experts will gain valuable insights and practical tips on successfully convincing their IT of the benefits of the low-code concept and engaging it. This session provides a comprehensive overview of the potentials and challenges of low-code and self-service platforms.

Join us as we explore the transformative potential of this governance approach for Low-Code & Self-Service-Platforms.

For over 5 years, Deepak Sahu has been a software engineer at Mueller Service GmbH. With 13 years of experience, he crafts robust code for critical applications. As a Microsoft-certified expert, his focus spans Manufacturing Systems, Warehouse Automation, and financial instruments. Architecting and leading 'The Replay Software' showcases his prowess in optimizing Agile workflows. Proficient in ERP integration, web, desktop apps, and Azure DevOps, he excels in CI/CD processes. Establishing a Microsoft Power Platform Onboarding Center and proactive incident management underscore his commitment. His structured project management using Clarity PPM ensures quality and risk assessment.

Dominik Wuttke serves as the Principal Team Lead for Digitalization & Data Science at Marmeladenbaum GmbH. His expertise spans Predictive Analytics, Microsoft Power Platform, and Data Engineering. Having successfully delivered numerous projects in sectors such as medical technology, insurance, and energy, Dominik Wuttke combines leadership with technical prowess. As a seasoned lecturer, he shares his comprehensive knowledge at key conferences, solidifying his position as a principal team lead in the industry.

Florian Rappelt designs and develops innovative solutions in his role as Senior Power Platform Architect. With a focus on governance, he shapes efficient processes. Through training initiatives, he empowers users and establishes an active Maker community. His passion lies in maximizing the utilization of the Power Platform.

Deepak Sahu, Dominik Wuttke, Florian Rappelt
Talk: Mi 6.3


13:10 - 14:40
Pause
Mittagessen & Ausstellung/Lunch & Exhibition

16:20 - 16:50
Pause
Kaffee & Ausstellung/Coffee & Exhibition

16:50 - 17:35
Mi 1.5
How to increase data literacy in a large corporation

This presentation gives an overview of Boehringer Ingelheim's Data Science Academy. The Academy aims to provide data literacy to all data users and leaders, and also offers upskilling opportunities for experts in the domain. Besides the general setup, I will give an overview of how the Academy was founded. Important learnings and our plans for further development are also presented.

Target Audience: Anyone who wants to increase data literacy in their company and discuss the setup of a Data Science Academy
Prerequisites: None
Level: Basic

2011 PhD in Physics
2012-2019 Risk manager in various positions (R+V, Wiesbaden)
2020-2021 Data Scientist (R+V, Wiesbaden)
2021-now Data Science Academy Manager (Boehringer Ingelheim, Ingelheim)

Marius Hilt


08:00 - 09:00
Pause
Frühstück & Registrierung/Breakfast & Registration

09:55 - 10:40
Do 1.2
Confluence: Joining Governance Streams to form Data Products

How can Data Cataloguing, Modelling, Data Quality (DQ), and other streams join forces to create business value? The speaker shares experience from a data vendor and a manufacturing business.

Target Audience: Data Professionals and decision makers with stakes in the value chain big picture
Prerequisites: Familiarity with Data Governance concepts (Catalogue, Quality, Integration etc.)
Level: Advanced

Extended Abstract:
Data Governance can contribute local optimizations to a company's value chain, such as better data discovery via a Data Catalogue, or quality-monitored and cleansed data sets. From a 30,000 ft Data Strategy view, it is even more desirable to connect the dots for business objects frequently reused among business processes and make them available as governed, quality-controlled, easily accessible Data Products.

The speaker successfully launched a Data Governance program in a company traditionally ranking metal higher than data and will share experiences on the ongoing Data Product journey:

  • Identifying scope
  • Cataloging technical metadata
  • Modeling a logical layer
  • Managing sensitive data in a hybrid architecture
  • Simplifying cross-system access

Dominik Ebeling is a CDMP-certified data and technology manager with more than ten years of international experience in start-up and enterprise contexts. He is passionate about building and developing successful teams, optimizing global processes and structures, and turning data into solutions for customer problems. In his current role as Head of Data Governance at Rolls-Royce Power Systems, Dominik is developing a long-standing manufacturing business into a data-driven organization.

Dominik Ebeling


10:40 - 11:10
Pause
Kaffee & Ausstellung/Coffee & Exhibition

11:10 - 12:10
Do 2.3
Agile AI Architectures: Azure MLOps for Dynamic Use Cases

Explore the future of MLOps as we delve into building Azure ML pipelines using OOP. Discover how a generic, reusable MLOps pipeline streamlines the initiation of new use cases. We use MLflow to manage the ML lifecycle and model deployments, and we leverage OOP and dependency injection to build an MLOps framework that eliminates boilerplate and makes it easy for our customers to start new use cases. Developers can reuse or inject their own training modules, or use AutoML instead. This solution acts as an accelerator and saves significant development time.

Target Audience: Data Scientists, Engineers, DevOps and all Data Enthusiasts
Prerequisites: Basic knowledge of Data Science, Data Engineering and DevOps
Level: Advanced

Extended Abstract:
In our solution, we integrate Azure AutoML, enabling individuals without expertise in data science to effortlessly deploy state-of-the-art machine learning models onto their datasets. This facilitates the extraction of optimal models, which can subsequently be refined and customized to align with their specific requirements.
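The dependency-injection idea behind such a framework can be sketched in a few lines of plain Python. This is a minimal illustration only, not the speakers' actual framework or the Azure ML/MLflow APIs; class names such as `MLOpsPipeline` and `AutoMLTrainer` are hypothetical:

```python
from abc import ABC, abstractmethod

class Trainer(ABC):
    """Interchangeable training module; concrete trainers are injected into the pipeline."""
    @abstractmethod
    def train(self, data):
        ...

class CustomTrainer(Trainer):
    """A hand-written training module a developer can reuse across use cases."""
    def train(self, data):
        return {"model": "custom", "n_rows": len(data)}

class AutoMLTrainer(Trainer):
    """Stand-in for an AutoML-backed training module."""
    def train(self, data):
        return {"model": "automl", "n_rows": len(data)}

class MLOpsPipeline:
    """Generic pipeline: shared steps live here once; only the trainer varies."""
    def __init__(self, trainer: Trainer):
        self.trainer = trainer  # dependency injection: any Trainer implementation works

    def run(self, data):
        cleaned = [row for row in data if row is not None]  # shared preprocessing step
        return self.trainer.train(cleaned)

# Starting a new use case only means injecting a different trainer.
result = MLOpsPipeline(AutoMLTrainer()).run([1, None, 2])
```

The boilerplate (preprocessing, logging, deployment hooks) stays in the generic pipeline, while each new use case supplies only its training module.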

Saurabh Nawalgaria is an experienced cloud and data engineer at Synvert Data Insights, specializing in Azure, Kubernetes, Spark, and SQL. He is proficient in designing and implementing ETL pipelines and has led and delivered pivotal cloud data platform projects. His versatile skills extend to MLOps, AzureML, and MLflow, and he has contributed significantly to diverse cloud and data engineering initiatives, demonstrating robust technical acumen.

Mustafa Tok is a seasoned data engineer and architect consultant at synvert Data Insights, specializing in Spark, Azure, and Databricks. He brings extensive experience in data and cloud technologies to his role. With a focus on data engineering and platform projects, he leads the Azure competence cluster. His expertise extends to designing and implementing robust data pipelines, optimizing data workflows, and architecting scalable solutions in Azure environments.

Saurabh Nawalgaria, Mustafa Tok
Talk: Do 2.3


11:10 - 12:10
Do 4.3
Intelligent Data Engineering & Integration: A Show Case

In this session, colleagues from Siemens Financial Services and b.telligent present how they are jointly building an intelligent data integration framework based on Azure services. They leverage the unique advantages of concepts such as generic loading and automated schema evolution to maximize flexibility, and show how they fine-tune workloads to ensure smooth performance and cost-efficient workload management at the same time.
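The ideas of generic loading and automated schema evolution can be sketched framework-free. This is a minimal illustration under assumptions (function names are hypothetical; the actual platform builds on Azure services, not on dictionaries):

```python
def evolve_schema(target_cols, incoming_rows):
    """Extend the target column list by any new columns discovered in the load."""
    incoming_cols = set().union(*(row.keys() for row in incoming_rows))
    new_cols = sorted(incoming_cols - set(target_cols))  # deterministic order for new columns
    return list(target_cols) + new_cols

def generic_load(target_cols, incoming_rows):
    """Generic load: fit every row to the (possibly evolved) schema, nulling the gaps."""
    cols = evolve_schema(target_cols, incoming_rows)
    return cols, [{c: row.get(c) for c in cols} for row in incoming_rows]

# A source system starts sending a new "currency" field; the load absorbs it automatically.
cols, rows = generic_load(
    ["id", "amount"],
    [{"id": 1, "amount": 10}, {"id": 2, "amount": 20, "currency": "EUR"}],
)
```

The point is that a single generic loader handles every source: schema changes widen the target table instead of breaking the pipeline.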

Target Audience: Technical Experts in Data Warehousing, Data & Cloud Engineering.
Prerequisites: Advanced Knowledge in ETL/ELT Processes, Azure Cloud, Data Integration a/o Data Warehousing
Level: Advanced

Michael Bruckner has over 10 years of experience working in various roles and industries in the field of business intelligence, data analytics, data engineering, and data warehousing. At Siemens Financial Services, he currently leads the Data Management department within the IT department and is driving the development of an enterprise-wide, cloud-based data platform as an IT Program Manager.

Niklas Sander works as a data engineer at Siemens Financial Services GmbH in Munich. Together with colleagues, he is driving the development and expansion of a new cloud data platform. His focus is on topics such as data ingestion, enabling citizen developers, and cost-performance tuning in the Azure Synapse area.

Benjamin Naujoks Roldan works as a platform engineer at Siemens Financial Services GmbH in Munich. His focus is cloud engineering, the development of customized apps, and the seamless integration of different software components into the data platform of Siemens Financial Services.

Daniel works as a Cloud Data Warehouse Architect at Siemens Financial Services GmbH in Munich. He has many years of experience in data-intensive BI and analytics projects. As the lead data architect, he is responsible for the development and expansion of a new cloud data platform. His focus is on cloud architecture, data integration, data modeling, and automation.

Michael Bruckner, Niklas Sander, Benjamin Naujoks Roldan, Daniel Bialas, Matthias Nohl
Talk: Do 4.3


13:10 - 14:30
Pause
Mittagessen & Ausstellung/Lunch & Exhibition

15:10 - 15:55
Do 2.4
Privacy-Aware Retrieval Augmented Generation

In this talk, we delve into the emerging field of applying large language models in industry, with a focus on techniques for augmenting large language models while preserving the confidentiality of the data used as additional context.

We will explore the fundamentals of RAG, its implications for privacy-preserving data handling, and practical considerations for its implementation in a real use case.

Target Audience: machine learning engineers, data scientists, data engineers, privacy officers, software engineers, product owners
Prerequisites: Basic knowledge about large language models
Level: Advanced

Extended Abstract:
The rapid advancement of language models has brought unprecedented capabilities in text generation, comprehension, and interaction. However, as these models become more integrated into industry applications, the tension between leveraging vast amounts of data for improved performance and ensuring the privacy of that data has intensified. This presentation delves into the practical application of Retrieval Augmented Generation (RAG) as a novel solution to this challenge, focusing on its implementation within large language models to balance data utility with confidentiality.

Retrieval Augmented Generation (RAG) combines the generative powers of large language models with the precision of information retrieval systems. By fetching relevant information on demand and incorporating it into the generation process, RAG models can produce more accurate and contextually relevant outputs. This technique not only enhances the model's utility by broadening its knowledge base but also introduces a mechanism for controlling the exposure of sensitive data.

The core of our discussion will center on the technical underpinnings of RAG, detailing how it retrieves information and the algorithms that govern its integration with generative models. We'll examine the architecture required to facilitate this interaction, the processes for indexing and retrieving content, and the methodologies for ensuring that the retrieved data does not compromise privacy.
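As a rough illustration of this retrieve-redact-generate flow, here is a minimal Python sketch. It is a toy under stated assumptions, not the speaker's implementation: a production system would use embedding-based retrieval and far more robust PII detection than a single regex.

```python
import re

# Toy document store; one document contains a sensitive email address.
DOCS = [
    "Policy PX-12 covers water damage up to 50000 EUR",
    "Claim handler contact: jane.doe@example.com for policy PX-12",
    "General terms apply to all policies",
]

def redact(text):
    """Mask email-like tokens before they reach the prompt (toy PII filter)."""
    return re.sub(r"\S+@\S+", "[REDACTED]", text)

def retrieve(query, docs, k=2):
    """Toy lexical retrieval by word overlap; real systems use embeddings."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query, docs=DOCS):
    """Fetch relevant context, scrub sensitive data, then assemble the model prompt."""
    context = "\n".join(redact(d) for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("what does policy PX-12 cover")
```

The privacy control sits between retrieval and generation: the model only ever sees the redacted context, so sensitive values in the document store are never exposed in its inputs or outputs.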

Furthermore, we'll present a case study demonstrating RAG's application on a real project. This example will illustrate how RAG can be tailored to specific privacy requirements and regulatory standards, showcasing its versatility and effectiveness. We will also discuss the challenges faced during these implementations.

Damir Kopljar is a Team Lead at Croz AI, a Ph.D. candidate in the field of explainable AI, and a passionate drone pilot.
He is always curious to learn more about complex problems and find the best solutions using data and machine learning. Currently, his efforts are concentrated on assisting clients in identifying opportunities to leverage AI and implement machine learning across various scenarios, as well as on establishing future-oriented ML factories grounded in MLOps best practices.
When he's not fixing broken drones, he enjoys mountain climbing.

Damir Kopljar
Talk: Do 2.4


15:10 - 15:55
Do 4.4
Strategies for a Seamless Data Shopping Experience

In this session, Marcel will talk about how organizations realize a data shopping process with the right technology, organization, and business processes. The session will unveil the role of each technology component, from data catalogs and data marketplaces to data virtualization and data pipelining. Furthermore, the roles of data governance and organizational change management will be explored, and customer reference architectures will be provided to illustrate an exemplary end-to-end data shopping process.

Target Audience: CTO, CIO, Head of BI, Data Engineer, Data Scientist, Data Steward
Prerequisites: Basic understanding of Data Mesh (optional)
Level: Advanced

In his current position as Head of Data Intelligence at Camelot, Marcel Oenning serves as a Managing Consultant and Architect, guiding clients in the selection of optimal data management tools for their modern data architectures. His focus extends to realizing the principles of data mesh across organizational, process, and technological dimensions. He is committed to delivering comprehensive solutions and ensuring that organizations harness the full potential of their data assets. He has cultivated expertise in data integration platforms, including SAP Datasphere, Informatica Intelligent Data Management Cloud (IDMC), and Denodo.

Georg Frey is a Consultant at Camelot ITLab, specializing in Data Science, Software Development, and Cloud technologies. With a proven track record in developing and implementing data architectures, Mr. Frey excels in designing robust data governance frameworks and comprehensive data strategies. His proficiency in leveraging cutting-edge technologies and his keen understanding of industry best practices ensure efficient and effective solutions tailored to meet clients' needs. Mr. Frey fosters a data-driven culture, enabling informed decision-making and innovation within organizations.

Marcel Oenning, Georg Frey
Talk: Do 4.4


16:00 - 16:15
Wrap-Up
Wrap-Up and Farewell

Track: Wrap Up
Talk: Wrap-Up

