Note: You can find the current TDWI conference here!

CONFERENCE PROGRAM 2021

Please note:
This page displays only the English-language sessions of TDWI München digital. You can find all conference sessions, including the German-language ones, here.

The times given in the conference program of TDWI München digital correspond to Central European Time (CET).

By clicking on "EVENT MERKEN" within the lecture descriptions you can arrange your own schedule. You can view your schedule at any time using the icon in the upper right corner.

  • Monday
    21.06.
  • Tuesday
    22.06.
  • Wednesday
    23.06.
09:00 - 10:00
KeyMo
KEYNOTE and Opening: What is work & What is human – in a superhuman future

The work we do in the long-term vision labs and the strategy team for the Technology & Innovation board area at SAP is one of the playgrounds for the title question. These topics are good because they generate exciting and abundant questions that, we believe, should be asked by everyone.

How does the future work – what are the mechanics of change? What is the future of human work and of our interactions with machines and each other? Are the industrialization of space and 3D-printed organs really a 'thing', and how do you relate to them? What are the methods to innovate desirable futures? How do you structure such a vast and complex fabric of possibilities in your domain, team, business, boardroom? How long is the history of the future, and how does it affect our fears and counter-reactions in socio-political movements? Most importantly, who designs the future that you and your community will live in?

We talk about change, but what are the mechanics and dynamics behind it? How fast is it? What it means to be an innovator is transforming faster than ever: from the classic definition of products and services at the surface to the computational system design of everything, including social and political systems, deeply rooted in a space of challenges and promises between cutting-edge tech and humanism. In an exponential, converging, digitally fuelled future, we design a relationship, a behaviour, that the product will follow.

What comes before strategy? We love to act on strategies, derive tactics, execute and operate; it is our psychological bias – we love to do what we are good at. But how do we derive a strategy? From what point of view, from what narratives, and what builds up the 'future fabric' that these narratives are woven from? And who does this in your enterprise? We will take a high-level look at how we build a desirable future with these questions in mind, and also at what we in our labs consider a desirable future of work, at least for the coming decade.

This exponential change is our innovation brief, and the stakes are high. It is just too important to be left only to... any single team. Technology is human evolution; the so-called HuMachine creates a playground for a 'human-centered' adventure, and this opens new worlds for our imagination at a time when 'now' has never been so temporary. Bringing these thoughts together, we need to answer the question: 'What is human, and what is work, in a superhuman future?'

Main chapters

  1. How science fiction is becoming science fact, and... business fact, now. The narratives and examples of current futures.
  2. What values, attitudes, and most of all methods and frameworks can we use to derive our …
  3. What we at SAP derived, from the two insights above, as our direction for the future of work and the tech to support it.
"I want to innovate what we call "work" out-of our lives via an empathic symbiosis between human ingenuity and machine intelligence." Martin Wezowski works as Chief Futurist for SAP’s Technology & Innovation. He is lecturing as a faculty member of Futur/IO, a European future institute and other education programs. He moved across a range of disciplines from UX, to systemic design to define innovation visions and strategies. Right now, he is on the mission to map, build and inspire a future we want to live in, more specifically, he crafts future outlooks, concepts,  products, defines and runs innovation frameworks to find out what’s next and beyond for SAP’s vast ecosystem and the future of work. 2017 he was named 1 of 100 most innovative minds in Germany as the “Software visionary” (“Handelsblatt”). Prior to joining SAP in 2013, he worked for Sony Ericsson (Sweden) as creative director later he lived in Shenzhen, China for 2 years while serving as director of the strategic UX board (Huawei). He builds on his international adventures stretching from Poland, Sweden, China to Germany, and frequently shares his passion for the future of humans and work around the world (Duke University, as EU-advisor and keynote speaker, SxSW, TEDx, Singularity-U Summit, BOMA, CES, Tech Open Air - TOA, HUB Berlin, DMEXCO, WEBIT, Ada Lovelace festival, TWIN Global, CeBIT, CES. WeAreDevelopers, etc.)
Martin Wezowski
Track: Keynote
Session: KeyMo

10:00 - 10:10
Short Break

10:10 - 10:50
Mo 2.1
Daily regression in an enterprise data warehouse

Over the last few decades, ETL and especially data warehouse testing have been gaining quite a bit of traction. The reason for that traction? The modern enterprise's dependency on data. This dependency calls for the right testing strategy to ensure the quality and correctness of the provided data. Furthermore, the drive to lower 'time to data' and overall costs is putting high pressure on test specialists to increase efficiency in this area. In this presentation I want to show you our journey to daily regression in our enterprise data warehouse.

Target Audience: Testers, Test Managers, Project Leaders, Decision Makers, Data Engineers, Managers
Prerequisites: none
Level: Basic

Extended Abstract:

Over the last few decades, ETL and especially data warehouse testing have been gaining quite a bit of traction, particularly given the emergence of Agile and DevOps as top trends in the software development industry. The reason for that traction? The modern enterprise's dependency on data.

This dependency calls for the right testing strategy and execution to ensure the quality and correctness of the provided data. 

Furthermore, the drive to lower 'time to data' and overall costs is putting high pressure on test specialists to increase efficiency in this area.

In this presentation I want to show you our journey to daily regression in our data warehouse. 

We improved our processes and changed our testing approach. We started with a quarterly regression test that took a whole testing team of 10 people two months to finish.

Now we do the same on a daily basis within our agile team.
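
To make the idea of automated daily regression more concrete, here is a minimal sketch (not the speaker's actual implementation; the table names, key columns and baseline file are hypothetical) that fingerprints warehouse tables and compares them against the last approved run:

import hashlib
import json
import sqlite3  # stand-in for the warehouse's actual driver

BASELINE_FILE = "regression_baseline.json"  # written by the last approved run

def table_fingerprint(conn, table, key_column):
    """Row count plus a checksum over the ordered rows of one table."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY {key_column}").fetchall()
    return {"rows": len(rows),
            "sha256": hashlib.sha256(repr(rows).encode()).hexdigest()}

def run_regression(db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    # Any drift in row count or content shows up as a failed table.
    return [table for table, key in [("dim_customer", "customer_id"),
                                     ("fact_sales", "sale_id")]
            if table_fingerprint(conn, table, key) != baseline.get(table)]

if __name__ == "__main__":
    print("tables with regressions:", run_regression())

Hooked into a nightly scheduler, a check like this turns the former quarterly manual effort into an automated pass/fail signal per table.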

Chapter Lead Test EDWH; Test Automation Expert; Innovation Expert
Bernhard Frauneder
Session: Mo 2.1

10:10 - 10:50
Mo 3.1
The need for a unified big data architecture

It is not big data that is the biggest change in the IT industry, but data usage. To become more data-driven and to succeed with their digital transformation, organizations are using their data more extensively to improve their business and decision processes. Unfortunately, it is hard for current data delivery systems to support new styles of data usage, such as data science, real-time data streaming analytics, and managing heavy data ingestion loads. This session discusses a real big-data-ready data architecture.

Target Audience:
Data architects, data warehouse designers, IT architects, analysts, enterprise architects, solutions architects
Prerequisites: Some understanding of data architectures
Level: Advanced

Extended Abstract:

It is not big data that is the biggest change in the IT industry; it is data usage that has changed most drastically over the last ten years. To become more data-driven and to succeed with their digital transformation, organizations are using their data more extensively to improve their business and decision processes. This may mean that more data needs to be collected, resulting in big data systems, but the goal remains to do more with data. Unfortunately, current data delivery systems don't have the right characteristics for supporting the requirements of big data systems, such as supporting data science, handling massive data streaming workloads, managing heavy data ingestion loads, and so on. Additionally, many organizations that are developing big data systems are developing isolated data delivery systems, such as an isolated data lake, an isolated data streaming system, an isolated data services system, and so on. This avalanche of data delivery systems is not beneficial to the organization; the wheel is reinvented over and over again. It's time to design real big-data-ready data architectures. This session discusses such an architecture, including the supporting technologies and their pros and cons.

Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.

Rick van der Lans
Session: Mo 3.1

10:50 - 11:20
Break

11:20 - 12:30
Mo 1.2
Governing big data and analytics

Data volumes are exploding, and companies are striving to use advanced analytics for more data-driven insights and self-learning systems. Enabling scalable data onboarding and analytics delivery processes with little human intervention but strong governance is key to successfully extracting value from big data and analytics. The CC CDQ has developed a framework for governing big data and analytics in close collaboration with industry partners. The framework supports practitioners in setting up processes, roles and tools to scale analytics use cases.

Target Audience: Analytics Manager, Data Manager, Project Leader
Prerequisites: Basic knowledge and some experience in analytics or data management
Level: Basic

Martin Fadler is a researcher at the Competence Center Corporate Data Quality (CC CDQ) and a doctoral candidate at the University of Lausanne. Before joining CC CDQ, he worked for several years as a data scientist in a venture of Deutsche Telekom. His research interests relate to data management in the context of big data and analytics, and to how artificial intelligence, in particular machine learning, can improve data management.
Christine Legner is a professor at the University of Lausanne and heads the Competence Center Corporate Data Quality (CC CDQ), in which she and her team, together with 20 European companies, develop innovative concepts and solutions for data management. She has many years of practical and academic experience and is the author of recent studies on data strategies and data catalogs.
Martin Fadler, Christine Legner
Session: Mo 1.2

11:20 - 12:30
Mo 2.2
Distributed databases in a microservice environment

The use of distributed databases across various microservices will be explained based on an example from a research project. The presentation elaborates on how to achieve data consistency across microservices, how to communicate using message brokers, how to scale the microservices, and how to achieve high application availability. Container virtualization and orchestration form the technology basis of the solution. The project example shows how to build an Artificial Intelligence (AI) solution – as a service!

Target Audience: Data Engineer, Data Scientist, Project leader, Architects
Prerequisites: Common understanding of database technology and architecture
Level: Advanced

Extended Abstract:

Developing microservices has various advantages compared to the traditional monolithic approach. Loosely coupled microservices enable smaller teams to develop independently and to use CI/CD on a daily basis. Microservices can be scaled independently, using different database technologies optimised for particular use cases. Microservices are also fault-isolated, so that particular failures will not result in an overall outage.

But how do you ensure data consistency across the distributed databases of all microservices? How do you react to particular failures? And how do the services interact and communicate?

A research project for intelligent energy analysis will be presented. The solution realizes an Artificial Intelligence (AI) system analyzing streaming data in near real time to ensure energy savings in a production environment. The presentation will explain the steps necessary to establish a microservice environment for artificial intelligence and machine learning. Central logging guarantees operations monitoring across all microservices. Dashboards present the results to the technical staff monitoring the machine learning libraries as well as to the process owner of the domain, e.g. the operations manager or insurance agent. The solution is based on Docker as the container virtualisation technology and Kubernetes for container orchestration. In the research project, the solution is realized on premises, but it can easily be deployed in the cloud.
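
As a schematic illustration of this event-driven style (a toy sketch, not the project's code): two services keep their own private data stores eventually consistent by exchanging events through a broker. The in-memory broker below stands in for a real message broker such as Kafka or RabbitMQ, and all names are invented:

from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for a message broker (e.g. Kafka, RabbitMQ)."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)
    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)  # a real broker delivers asynchronously and durably

broker = Broker()
readings_db = {}  # private store of the ingestion service
model_db = {}     # private store of the analysis service

# Analysis service subscribes and updates its own store: eventual consistency
# without a shared database.
broker.subscribe("reading.created",
                 lambda e: model_db.update({e["machine"]: e["kwh"] * 1.1}))

def ingest_reading(machine_id, kwh):
    """Ingestion service: commit locally first, then publish the event."""
    readings_db[machine_id] = kwh
    broker.publish("reading.created", {"machine": machine_id, "kwh": kwh})

ingest_reading("press-01", 42.0)
print(readings_db, model_db)  # both stores converge without sharing a database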

The speaker Matthias Braun studied mathematics and physics. He has worked as a consultant for Business Intelligence and Business Analytics for more than 10 years. Matthias Braun joined Trevisto AG, a consulting company for digitalisation, Business Intelligence and AI, in 2015. At Trevisto, he is responsible for the development of an AI framework covering the full project life cycle, from the proof-of-concept phase to operation and monitoring of the solution. He has worked on various data science projects.
Matthias Braun
Session: Mo 2.2

11:20 - 12:30
Mo 3.2
Data virtualization in real life projects

Data virtualization is being adopted by more and more organizations for different use cases, such as 360-degree customer views, logical data warehouses, democratizing data, and self-service BI. The effect is that knowledge is now available on how to use this agile data integration technology effectively and efficiently. In this session, lessons learned, tips and tricks, do's and don'ts, and guidelines are discussed. And what are the biggest pitfalls? In short, the expertise gathered from numerous projects is discussed.

Target Audience: Data architects, data warehouse designers, IT architects, analysts, solutions architects
Prerequisites: Some understanding of database technology
Level: Advanced

Extended Abstract:
Data virtualization is being adopted by more and more organizations for different use cases, such as 360-degree customer views, logical data warehouses, and self-service BI. The effect is that knowledge is now available on how to use this agile data integration technology properly. In this practical session the lessons learned are discussed, along with tips and tricks, do's and don'ts, and guidelines. In addition, answers are given to questions such as: How does data virtualization work and what are the latest developments? What are the reasons to choose data virtualization? How do you make the right choices in architecture and technology? How does data virtualization fit into your application landscape? Where and how do you start with the implementation? And what are the biggest pitfalls in implementation? In short, the expertise gathered from numerous projects is discussed.
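
To make the core idea tangible (a generic sketch, not tied to any vendor discussed in the session): a virtualization layer exposes heterogeneous sources as virtual views, so consumers query one logical schema while the data stays in place. Here DuckDB plays the role of the virtualization engine, and the file names are hypothetical:

import duckdb  # embedded engine standing in for a data virtualization layer

con = duckdb.connect()  # in-memory; no data is copied anywhere

# Virtual views over two heterogeneous sources (hypothetical file names).
con.execute(
    "CREATE VIEW customers AS "
    "SELECT * FROM read_csv_auto('crm_export.csv')"
)
con.execute(
    "CREATE VIEW orders AS "
    "SELECT * FROM read_parquet('orders/*.parquet')"
)

# Consumers see one logical schema and join across sources transparently.
print(con.execute(
    "SELECT c.customer_id, count(*) AS order_count "
    "FROM customers c JOIN orders o USING (customer_id) "
    "GROUP BY c.customer_id"
).fetchall())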

Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.

Rick van der Lans
Session: Mo 3.2

12:30 - 14:00
Lunch Break

14:00 - 15:30
Mo 3.3
DataOps: using data fabric and a data catalog for continuous development of data assets

As the data landscape becomes more complex with many new data sources, and data spread across data centres, cloud storage and multiple types of data store, the challenge of governing and integrating data gets progressively harder. The question is what can we do about it? This session looks at data fabric and data catalogs and how you can use them to build trusted re-usable data assets across a distributed data landscape.

Target Audience: CDO, Data Engineer, CIO, Data Scientist, Data Architect, Enterprise Architect
Prerequisites: Understanding of data integration software and development using a CI/CD DevOps approach
Level: Advanced

Extended Abstract:
As the data landscape becomes more complex with many new data sources, and data spread across data centres, cloud storage and multiple types of data store, the challenge of governing and integrating data gets progressively harder. The question is what can we do about it? This session looks at data fabric and data catalogs and how you can use them to build trusted re-usable data assets across a distributed data landscape.

  • The increasingly complex data landscape
  • Challenges in governing and integrating data in a distributed multi-cloud environment
  • What is data fabric, why it is important and what technologies are on the market?
  • What is a data catalog and what products are on the market?
  • Why do organizations need these technologies?
  • How do data catalogs and data fabric work together?
  • A closer look at data fabric capabilities
  • Defining common data entities in a catalog business glossary
  • Using a catalog for automatic data discovery
  • A DataOps CI/CD approach to building reusable, trusted data assets using data fabric

Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant, with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels on data management and analytics. He teaches, consults and presents around the globe.

Mike Ferguson
Session: Mo 3.3

14:40 - 14:50
Short Break

14:50 - 15:30
Mo 1.4
Insights to action: ABB's journey to data driven decisions

How do you enable digital transformation and create value through analytics?

Building a global analytics function across a diverse application landscape incl. SAP and multiple data sources provides many challenges. See how ABB successfully managed this journey and now enjoys the benefits of operational analytics globally, a shift in mindsets, and a more data-driven way of working.

You will also discover the impact of key technologies used (Change-Data-Capture, Automation & AI) and see real examples of specific analytics deployed in the business.

Target Audience:
Business analysts, decision makers, C-Level
Prerequisites: Basic Knowledge
Level: Basic

Extended Abstract:

How do you enable digital transformation and create value through analytics?

This session tells the story of building a global analytics function in an environment with a diverse set of applications, including a complex SAP system landscape and many data sources. The speaker will talk about some of the challenges along the journey, but also about the success in deploying operational analytics globally, shifting mindsets and helping the transition to a more digital, data-driven way of working.

The audience will also discover the impact of key technologies used (e.g.: Change-Data-Capture, Visualization, Automation and AI) and how these helped to create value and drive revenue increase for ABB, using real examples of specific analytics deployed in the business.

Mircea Zamfir is a data analytics advocate at ABB, a pioneering technology leader in the field of energy & automation. He transforms traditional processes and drives value creation through digital enablement. Mircea has 14+ years of leading experience in technology, analytics and operations, combining business knowledge with technology to drive process improvements and cash recoveries.
Feridun Ozmen is an Internal Auditor - Business Analyst at ABB Asea Brown Boveri Ltd.
Mircea Zamfir, Feridun Ozmen
Session: Mo 1.4

14:50 - 15:30
Mo 2.4
Building a data culture with people and ideas that matter

A successful data culture brings together people from across the business to collaborate, share and learn from each other about data, analytics and the business value data can hold. In this session I share my suggestions for bringing together your most talented data people so your organization can gain more value from their skills and the data assets you invested in.

Come with an open mind and leave with tips, tricks and ideas to get started straight away.

Target Audience: Data practitioners, analysts, CoE leaders, Management
Prerequisites: basic understanding of how businesses use data
Level: Basic

Eva Murray is Snowflake's Lead Evangelist for the EMEA region and a recognized expert in data visualization. She has published four books, including on how to build data communities and how to optimize investments in data infrastructure in professional football.
At Snowflake, Eva Murray helps companies get their employees excited about data and technology, thereby realizing aspects of data strategy such as data culture, data democratization and data literacy.
Eva Murray has been an active part of the data analysis and data visualization community for many years and is committed to supporting women in data and tech careers.

Eva Murray
Session: Mo 2.4

15:30 - 16:00
Break

16:00 - 17:10
Mo 3.5
No more metadata, let's talk context and meaning

Metadata is a long-time favourite at conferences. However, most attempts to implement real metadata solutions have stumbled. Recent data quality and integrity issues, particularly in data lakes, have brought the topic to the fore again, often under the guise of data catalogues. But the focus remains largely technological. Only by reframing metadata as context-setting information and positioning it in a model of information, knowledge, and meaning for the business can we successfully implement the topic formerly known as metadata.

Target Audience:
- Enterprise-, Systems-, Solutions and Data Architects
- Systems-, Strategy and Business Intelligence Managers
- Data Warehouse and Data Lake Systems Designers and Developers
- Data and Database Administrators
- Tech-Savvy Business Analysts

Prerequisites: Basic knowledge of data management principles. Experience of designing data warehouses or lakes would be useful

Level: Basic

Extended Abstract:

Metadata has – quite correctly – been at the heart of data warehouse thinking for three decades. It's the sort of cross-functional and overarching topic that excites data architects and data management professionals. Why then can so few businesses claim success in implementing it? The real question is: implementing what?

This session answers that question by first refining our understanding of metadata, which has been made overly technical over the years and more recently been misappropriated by surveillance-driven organisations. With this new understanding, we can reposition metadata not as a standalone topic but as a vital and contiguous subset of the business information that is central to digital transformation. With this positioning, 'metadata implementation' will become a successful – but largely invisible – component of information delivery projects.

Dr. Barry Devlin is a founder of data warehousing, defining its first architecture in 1985. He is respected worldwide as a visionary on BI and big data, publishing the seminal book 'Business unIntelligence' in 2013. With 30+ years' IT experience in IBM and 9sight, he offers thought-leadership and consulting to BI and big data buyers and vendors. He speaks and writes on all aspects of information.
Barry Devlin
Session: Mo 3.5

16:00 - 17:10
Mo 4.5
AI factory and metadata driven data ingestion at ERGO

In this session, the ERGO Group, one of Europe's leading insurance companies, presents their AI Factory for development and operationalization of AI models. The session gives an architectural overview of the AI Factory's components. Furthermore, it explains how cloud-native technologies like Openshift and AWS Cloud Services aided in moving towards a data driven organization. A deep dive into the AI Factory's data ingestion process shows how metadata-driven data ingestion supports Data Governance in an enterprise context.

Target Audience: AI Leader, Insurance, Decision Maker, Data Engineer
Prerequisites: Background knowledge AI, Big Data Technologies, BIA
Level: Advanced

Extended Abstract:
In times of Advanced Analytics and AI, enterprises are striving towards automated and operationalized analytics pipelines.

In this session, ERGO and saracus consulting present the ERGO Group AI Factory. In particular, the presentation retraces how ERGO in collaboration with saracus consulting evolved from an on-premises analytics environment to an automated AI-Ops environment running on modern technologies within the AWS Cloud.

To this end, strategic aspects of delivering AI as a service as well as important components for delivering automated AI Pipelines in enterprises are highlighted.

Furthermore, the speakers take a deep dive into the technical aspects of the AI Factory's metadata driven data ingestion pipeline, emphasizing how it supports the key functionalities for Data Governance within ERGO's Data Strategy.

Felix joined the ERGO Group in 2004, starting in the finance department. Subsequently, he headed a team with (IT) responsibility for data management, reporting and self-service BI in the Property/Casualty segment. In his current position as Head of Data Engineering, he is responsible for ERGO's Advanced Analytics IT platforms as Business Owner and, with his team, ensures the AI data flow.
Christian Kundruß is a Senior Project and Program Manager at ITERGO, the IT service provider of the ERGO Group, and the responsible program manager for the AI IT program to roll out and implement various AI features within the ERGO Group.
After graduating in Information Systems from Münster University, Lukas joined saracus consulting in 2015 and started consulting in the realm of data engineering. 
In his current position as Head of Data Engineering, Lukas leads a team of Data Engineers involved in multiple projects in Germany and Switzerland.
Felix Wenzel, Christian Kundruß, Lukas Hestermann
Session: Mo 4.5

17:10 - 17:30
Short Break

18:15 - 19:15
Welcome Reception

Digital Welcome Reception

09:45 - 10:00
Short Break

10:00 - 10:40
Di 4.1
End-to-End-Use Case: Metadata Management & DWH

Like many companies, the 3 Banken Gruppe faces the challenge of implementing data governance. With an end-to-end approach for metadata – from the business definition to the DWH implementation – a basis for this was created. The resource-efficient project focused on the use cases 'IT requirements', 'data quality' and 'data definitions'. The target groups for the metadata are primarily the lines of business, especially risk management, but also IT.

Target Audience: Data Governance Manager, Risk Manager, Data Quality Manager, Data Warehouse Architects, Data Modeler
Prerequisites: Basic knowledge
Level: Basic

Clemens Bousquet is a risk manager at Oberbank, which is part of the Austrian 3 Banken Gruppe. Both in a management role in risk management and in numerous projects, he has built up strong expertise in business data modeling, the introduction of data governance, and the implementation of BCBS 239 and IFRS 9. Before that, he completed degree programs in economics and international economics.

Lisa Müller is a senior consultant at dataspot. For several years she has worked in client projects – especially in banking – on questions at the interface between business and IT, developing customized solutions. Her excellent domain expertise comes to bear across the entire life cycle of metadata and data governance, with particular emphasis on business data modeling and metadata management. In addition to two completed bachelor's degrees, she holds a master's in business administration.

Clemens Bousquet, Lisa Müller
Session: Di 4.1

10:40 - 11:10
Break

12:20 - 12:30
Short Break

13:00 - 14:00
Lunch Break

14:40 - 14:50
Short Break

14:50 - 15:30
Di 2.4
From BI to AI & analytics industrialization

I will show the journey Continental Tires took from its first attempts to extend its BI services for analytics to operating a mission-critical industrialization environment that runs several AI projects. Besides the problems and obstacles that were overcome, I will also explain the decisions that were made and show the industrialization environment that was created. I will also explain why it was necessary to have such an environment instead of making use of classical BI tools.

Target Audience: Project leader, decision makers, CIO, Software engineers
Prerequisites: None
Level: Basic

Extended Abstract:

When Continental Tires started to establish data science as a function, it decided to grow this out of BI, but with an external employee. The journey was a long one, but thanks to the experience available, Continental was able to avoid some of the painful experiences made in other companies. They decided to build up an infrastructure for the industrialization of use cases from the first minute. The example of Tires is a good one for understanding the roadmap from BI to data science industrialization. While in the beginning there was hope that a simple extension of BI and BI tools would deliver an answer, today the organization is a completely different one. Tires has also created its own landscape for the industrialization of AI. Why this was done and why it might be useful will be shown in the talk.

Having studied sociology with a focus on statistical methods, Dubravko Dolic gained his first IT/programming experience during his time at the universities in Oldenburg and Northern Ireland. His professional career has been shaped by a large number of projects at the intersection of IT and business. Always striving to make data analyses easily accessible, he was a consultant for BI and data analysis for more than 15 years. Since 2017 he has been responsible for Data Science in the central IT of Continental Tires.
Dubravko Dolic
Session: Di 2.4

14:50 - 15:30
Di 3.4
Integration of SAP data into a common data model

In the past, data was often stored in a monolithic data warehouse. Recently, with the advent of big data, there has been a shift towards working directly with files. The challenge therefore arises of how to manage the data and store metadata information. In this presentation, I will show how SAP (ERP or BW) data can be extracted using SAP Data Intelligence (ODP framework) and stored along with its metadata information. The data is stored in a Common Data Model (CDM) format and can be easily integrated and consumed with various products.
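
To illustrate the general pattern of shipping data together with its metadata (a deliberately simplified, schematic example; the real CDM manifest format is richer and defined by the CDM specification, and the entity and attribute names below are invented SAP extraction targets):

import json

# Schematic metadata document in the spirit of a CDM 'model.json'.
model = {
    "name": "sap_sales",
    "entities": [
        {
            "name": "SalesOrder",
            "description": "Extracted via the SAP ODP framework",
            "attributes": [
                {"name": "VBELN", "dataType": "string"},    # sales document number
                {"name": "ERDAT", "dataType": "dateTime"},  # creation date
                {"name": "NETWR", "dataType": "decimal"},   # net value
            ],
            "partitions": [{"location": "sales/SalesOrder/part-0001.csv"}],
        }
    ],
}

# The metadata travels next to the data files, so any consuming product can
# discover schema and partitions without a separate catalog lookup.
with open("model.json", "w") as f:
    json.dump(model, f, indent=2)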

Target Audience: Professionals who would like to integrate SAP data into a Data Platform (e.g. Datalake) and include metadata information.
Prerequisites: Basic understanding of the SAP integration framework ODP, and cloud infrastructure (e.g. Azure Datalake Storage)
Level: Advanced

Julius von Ketelhodt studied geophysics and geoinformation science at the universities of Witwatersrand (South Africa) and Freiberg, and holds a doctorate in geophysics and seismology.

For several years, in data & analytics projects for large customers across a wide range of industries, he has focused on the question of how relevant data from various SAP source systems can be integrated into the cloud data platforms of leading vendors. He also helped initiate and further develop the Common Data Model (CDM) concept in collaboration with SAP, Microsoft and Adobe.

He has been a BI consultant at the data & AI consultancy initions for almost four years and also serves as Product Lead in the in-house SAP product team.

Julius von Ketelhodt is married and lives with his wife and two children in Hamburg.

Julius von Ketelhodt
Session: Di 3.4

15:30 - 16:00
Break

16:00 - 17:10
Di 2.5
Quantifying the impact of customer experience improvements on churn via causal modeling

Quantifying the impact of customer experience (CX) improvements on the financials is crucial for prioritizing and justifying investments. In telecommunications, as in other subscription-based industries, churn is one of the most important financial aspects to take into account, if not the most important. The presented approach shows how the churn impact of CX improvements – measured via Net Promoter Score (NPS) – can be estimated based on structural causal models. It makes use of algorithms for causal discovery and counterfactual simulation.
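
To give a flavour of the counterfactual logic involved (a toy linear SCM with invented coefficients, not the model presented in the session):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy structural causal model: customer experience (cx) -> NPS -> churn.
cx = rng.normal(0, 1, n)            # latent CX quality
eps_nps = rng.normal(0, 1, n)       # exogenous noise on NPS
u_churn = rng.uniform(0, 1, n)      # exogenous noise on churn, reused below

nps = 5 + 2.0 * cx + eps_nps                   # structural equation for NPS
churn = sigmoid(-1.0 - 0.3 * nps) > u_churn    # structural equation for churn

# Counterfactual: intervene do(cx := cx + 0.5) and replay the SAME noise terms.
nps_cf = 5 + 2.0 * (cx + 0.5) + eps_nps
churn_cf = sigmoid(-1.0 - 0.3 * nps_cf) > u_churn

print(f"observed churn rate:       {churn.mean():.3f}")
print(f"counterfactual churn rate: {churn_cf.mean():.3f}")

Reusing the same exogenous noise under the intervention is what distinguishes a counterfactual from a plain prediction: the difference between the two churn rates isolates the effect of the CX improvement.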

Target Audience: Data Scientist, Decision Maker
Prerequisites: basic understanding of statistical modeling
Level: Advanced

Dr. Björn Höfer manages a team for Advanced Analytics & Data Science at Telefónica Germany. He studied International Business at the University of Paderborn and received his PhD from Friedrich-Alexander-University (FAU) of Nuremberg for enhancing simulated test market methodology. His team at Telefónica uses modern data science methods to improve decision making at Telefónica.
Björn Höfer
Session: Di 2.5

17:20 - 18:10
Special Keynote
SPECIAL KEYNOTE: Lessons from the Racetrack with Gary Paffett, Championship Winning Racing Driver

Two-time DTM Champion, Gary Paffett, has competed at the highest levels of professional motorsport for two decades, racing in series including DTM, Formula 1 and the ABB FIA Formula E World Championship. In his role as Sporting & Technical Advisor and Reserve & Development Driver to the Mercedes-EQ Formula E Team, Gary shares his knowledge and expertise in a role that has been a major factor in the development of both the team and the car in Mercedes’ first two seasons within the all-electric street racing series. 
Gary Paffett
Track: Keynote
Session: Special Keynote

09:00 - 09:40
Mi 1.1
Integrating SAP data into a modern analytics warehouse on Google Cloud in an automated fashion

Transforming an organization to become more data-driven usually presents a set of technological challenges. In this session you will learn how to integrate existing application data in real time to achieve a useful critical mass of explorable data.

Target Audience: CTOs, CIOs, CDOs, data engineers, data modelers, data analysts, data scientists
Prerequisites: General understanding of data in the context of the enterprise
Level: Basic

Extended Abstract:
'Data is the new oil.' These days many organizations are in the midst of a digital transformation. In order to enable data-driven processes, it is imperative to leverage the treasure trove of data hidden away in already existing applications. Extracting data from those applications (the E in ETL) is commonly the biggest pain point of data engineering. Many systems were not built to be extracted from, so data engineering needs to carefully balance performance against load. At the same time customers demand more: nightly batch processing is not good enough anymore, and the need for real-time information is growing.

How do you align all of this? Enter Change Data Capture (CDC). Based on a real-life use case, you will learn how to integrate data from an existing mission-critical SAP application into a modern cloud-based analytics warehouse. We will tell the story of our journey, from the initial use cases, the constraints that we faced and the discoveries along the way, to the final design and our appetite for more.
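
As a highly simplified illustration of the CDC principle (timestamp-based polling; real deployments typically use dedicated log-based CDC tools, and all table and column names here are hypothetical): only rows changed since the last high-water mark are extracted and upserted into the warehouse.

import sqlite3  # stand-in for the source system's database driver

def extract_changes(source_conn, high_water_mark):
    """Timestamp-based CDC: pull only rows changed since the last run."""
    rows = source_conn.execute(
        "SELECT id, payload, changed_at FROM source_table "
        "WHERE changed_at > ? ORDER BY changed_at",
        (high_water_mark,),
    ).fetchall()
    new_mark = rows[-1][2] if rows else high_water_mark
    return rows, new_mark

def apply_changes(target_conn, rows):
    """Upsert the captured changes into the analytics warehouse."""
    target_conn.executemany(
        "INSERT INTO target_table (id, payload, changed_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload, "
        "changed_at = excluded.changed_at",
        rows,
    )
    target_conn.commit()

Log-based CDC reads the source's transaction log instead of querying timestamps, which reduces load on the source system; the high-water-mark bookkeeping stays conceptually the same.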

Matthias is a globally experienced IT executive with core competencies in big data analytics, retail, e-commerce, loyalty & payment. He has more than 20 years of experience in developing scalable retail solutions and is currently building a data platform for BI, advanced analytics & data science at Breuninger.com.
Matthias Krenzel
Session: Mi 1.1

09:00 - 10:30
Mi 4.1
Is there life beyond Hadoop – should data lakes be drained?

The past year or two has seen major changes in vendor support for the extended Hadoop ecosystem, with withdrawals, collapses, and mergers, as well as de-emphasis of Hadoop in marketing. Some analysts have even declared Hadoop dead. The reality is more subtle, as this session shows, through an exploration of Hadoop's strengths and weaknesses, history, current status and prospects. Discussion topics include plans for initiating new Hadoop projects and what to do if you have already invested, successfully or otherwise, in data lakes.

Target Audience:
- Enterprise-, systems-, solutions- and data architects
- Systems-, strategy- and business intelligence managers
- Data warehouse and data lake systems designers and developers
- Data and database administrators

Prerequisites: Basic knowledge of data management principles. Experience of designing data warehouses or lakes would be useful.

Level: Basic

Extended Abstract:
Hadoop and its extended menagerie of related projects have been responsible for a decade of reinvention in the data management world. On the positive side, projects that would have been impossible with traditional technology have become not only possible but commonplace in today's environment. Hadoop has been at the heart of the explosion of digital business.

However, Hadoop has also been beset by a long list of complaints and problems, particularly relating to data management and systems management. Some of Hadoop's early strengths have been eroded as relational database vendors have upped their game. Furthermore, the rapid growth of Cloud solutions to many massive data delivery and processing projects has impacted Hadoop's commercial viability.

All these factors have led to a questioning of the future of the extended Hadoop ecosystem. This session answers these questions, not just from a technological viewpoint, but in the broader context of existing investments in skills and infrastructure, the direction of digital transformation, and the changing socio-political environment in which business operates.

Dr. Barry Devlin is a founder of data warehousing, defining its first architecture in 1985. He is respected worldwide as a visionary on BI and big data, publishing the seminal book 'Business unIntelligence' in 2013. With 30+ years' IT experience in IBM and 9sight, he offers thought-leadership and consulting to BI and big data buyers and vendors. He speaks and writes on all aspects of information.
Barry Devlin
Session: Mi 4.1

09:40 - 09:50
Short Break

09:50 - 10:30
Mi 2.2
Quality whisperer – self-learning AI for production quality

The ZF plant in Saarbrücken, Germany, manufactures around 11,000 transmissions per day. With 17 basic transmission types in 700 variants, the plant manages a large number of variants. Every transmission consists of up to 600 parts. An AI project was started to get reliable and fast results in root cause discovery. Speed is important because production runs 24 hours a day, 7 days a week. The target is to reduce waste in certain manufacturing domains by 20%. The key success factor is the fast detection mechanism within the production chain delivered by AI.

Target Audience: Production manager, quality manager, CDO, CIO
Prerequisites: none
Level: Basic

Extended Abstract:
The ZF plant in Saarbrücken, Germany, manufactures around 11,000 transmissions per day. With 17 basic transmission types in 700 variants, the plant manages a large number of variants. Every transmission consists of up to 600 parts. Each transmission is 100% tested in every technical detail before shipment. The Saarbrücken plant is a forerunner and lead plant in innovative Industry 4.0 technologies. Therefore, activities were started to tackle one significant challenge caused by the enormous variant diversity: finding root causes for unsuccessful end-of-line testing. Managing this complexity is a big challenge because transmission parts can be produced in a huge number of variant processes. Process experts from each domain – quality and testing, assembly departments and manufacturing units – had to spend significant time analyzing influencing factors for malfunctions and deciding on the best actions to prevent end-of-line test failures.

Therefore, an AI project was started with the objective of getting reliable and fast results in root cause discovery. Speed is important because production runs 24 hours a day, 7 days a week. The sooner the real reasons for malfunctions are discovered, the sooner activities can be implemented to avoid bad quality. This saves a lot of time and significantly reduces waste. The target is to reduce waste in certain manufacturing domains by 20%. The key success factor is the fast detection mechanism within the production chain delivered by AI.

Complex root-cause analyses can be reduced from several days to hours.

ZF's intention with the digitalization approach is to deliver fast information to the people who are responsible for decision processes, keeping the plant at optimal output with high-quality products. The self-learning AI solution Predictive Intelligence from IS Predict was used to analyze complex data masses from production, assembly and quality in order to find reliable data patterns, providing transparency on disturbing factors and factor combinations. For training the algorithms, end-to-end tracing data was used, made available in a data lake.
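
To sketch the kind of analysis involved (a generic illustration on synthetic data, not IS Predict's actual algorithms): a classifier trained on traced process parameters can rank which factors best separate failed from passed end-of-line tests, pointing engineers at candidate root causes.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 5_000

# Synthetic end-to-end tracing data: process parameters per transmission.
features = ["torque_station_3", "shim_thickness_mm", "supplier_lot"]
X = np.column_stack([
    rng.normal(100, 5, n),      # torque applied at station 3
    rng.normal(0.20, 0.05, n),  # shim thickness in mm
    rng.integers(0, 3, n),      # supplier lot id (0..2)
])
# In this toy setup, failures are driven mainly by one parameter.
failed = (X[:, 1] > 0.27) | (rng.uniform(0, 1, n) < 0.02)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, failed)
for name, score in sorted(zip(features, tree.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name:20s} {score:.2f}")  # top score = first root-cause candidate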

Britta Hilt has been working on the user side of artificial intelligence since 2011. She is co-founder and managing director of the AI company IS Predict, which has made a name for itself through its automation of data science and its explanatory AI. Before founding IS Predict, she worked for more than 15 years at an international IT company (IDS Scheer / Software AG), most recently as a director responsible for product management and solution marketing.

Britta Hilt
Session: Mi 2.2

10:30 - 11:00
Break

11:00 - 12:10
Mi 4.3
Data science workbenches and machine learning automation – new technologies for agile data science

This session looks at how data science workbenches and machine learning automation tools can help business analysts to become data scientists and so meet the demand of business.

Target Audience: CDO, Head of Analytics, Data Scientist, Business Analysts, CIO
Prerequisites: Basic understanding of Data Science
Level: Advanced

Extended Abstract:
The demand for analytics is now almost everywhere in the business. Analytics are needed in sales, marketing and self-service, finance, risk, operations, supply chain and even HR. However, the current shortage of data scientists and the reliance on detailed skills such as programming have led many corporate executives to question current approaches to developing high-value analytical models and to ask whether they can be accelerated in any way to improve agility and reduce time to value. This session looks at this problem in detail and at how emerging data science workbenches and machine learning automation tools can help reduce the reliance on highly skilled data scientists and allow business analysts to become data scientists, and so meet the demand of business.

 

  • The explosion in demand for analytics
  • Data science and the modern analytical ecosystem
  • Challenges with current approaches to analytics
  • Requirements to reduce time to value and accelerate development of analytical models
  • Improving productivity by integrating information catalogs and data science workbenches, e.g. Amazon SageMaker, Cloudera CDP Machine Learning, IBM Watson Studio, Microsoft Azure ML Service
  • Accelerating model development, monitoring and model refresh using ML automation tools, e.g. DataRobot, SAS, Dataiku Data Science Studio, Big Squid
  • Facilitating rapid analytics deployment via analytics as a service to maximise effectiveness and competitive edge
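
To hint at what such automation looks like in practice (a deliberately tiny, generic example; the commercial tools named above automate far more, including feature engineering, model selection, deployment and monitoring):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# A tiny taste of what ML automation tools industrialize: a systematic,
# cross-validated search over models and hyperparameters.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [3, None]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))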

Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant, with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels on data management and analytics. He teaches, consults and presents around the globe.

Mike Ferguson
Session: Mi 4.3

12:10 - 12:25
Short Break

12:25 - 13:10
KeyMi
KEYNOTE: Corporate IT Is a Theater Piece - What Role Are You Playing?

For decades, the engagement of IT staff in organizations has been channelled through a single function or department. Whatever title it bears, this single-counter IT takes care of everything under the digital sun. This model generates unhealthy behaviors in the IT ranks that are detrimental to the enterprises that need digital to operate, evolve, transform – or survive.

Drawing a parallel with a more mature industry, the current distribution of roles is analyzed and compared. It shows that the standard structure creates conflicts of roles that would be unacceptable – and in some cases illegal – in many other fields of work.

The typical IT engagement model in organizations has a direct effect on what constitutes success and how it is measured. These measures – in their current state – create a ripple effect on the quality and value of digital investments.

You will come to the inevitable conclusion: it is more than time to radically rethink how technology teams engage in organizations.

Over three decades, R.M. has held multiple roles in the IT departments of mid-sized to large organizations: programmer, business analyst, tester, database administrator, solution architect, data architect, enterprise architect, systems integrator. He has also held several IT management positions, leading teams of architects and designers in insurance, telecommunications, banking, and travel & transportation.
As a result of a relentless quest for answers to all sorts of questions, R.M. has published a business book and regularly writes on LinkedIn and his blog about IT strategic management challenges. He is also a member of the core team of the Intersection Group, a multi-disciplinary community focused on sharing ways to design better enterprises.
Mr Bastien holds a bachelor's degree in Management Information Systems and a research-oriented master's in business administration. He has been a certified Project Management Professional (PMP©) for more than 15 years. When not working professionally, he renovates old houses or skippers sailboats across oceans.
R.M. Bastien
Track: Keynote
Session: KeyMi

13:10 - 14:30
Lunch Break

14:30 - 18:10
Mi 4.4
Best practices in DataOps for analytics

Analytics teams are struggling to create, publish, and maintain analytics to meet demand, and many analytics projects fail to meet expectations and deliver value. DataOps is a new approach combining tools and practices to simplify the development of analytics while ensuring high-quality data. DataOps shortens life cycles, reduces technical debt and increases analytics success. This session covers the best practices for the analytics team to deliver DataOps.

Target Audience: Data scientists, project leaders, analyst, data engineers, project sponsors, business leaders
Prerequisites: None
Level: Professional

Extended Abstract:
DataOps is the next generation of analytics delivery: it addresses the major issues of analytics technical debt, code reuse and data quality, and incorporates the flexibility of agile. This session defines what DataOps is (it is not DevOps) and highlights the best practices for addressing some of the largest challenges preventing analytics teams from delivering analytics value. The session outlines how DataOps is used to inspire teamwork, improve data quality, incorporate agile delivery, and establish an environment for reuse and innovation.

With over 20 years of experience, Dr. Larson is an active practitioner and academic focusing on BI, data warehousing, analytics, and AI. Dr. Larson completed her doctorate in management in information technology leadership. She holds Project Management Professional (PMP) and Certified Business Intelligence Professional (CBIP) certifications. Dr. Larson attended AT&T Executive Training at Harvard Business School in 2001, focusing on IT leadership. She is a regular contributor to TDWI publications and presents several times a year at conferences. Dr. Larson is principal faculty at City University of Seattle in the United States.
Deanne Larson
Session: Mi 4.4

15:40 - 16:10
Break

16:10 - 16:50
Mi 3.5
Machine learning raises industry 4.0 to the next level

How is machine learning used in the real world? How did our customer Liebherr mitigate the problem of unreliable suppliers, thereby making their manufacturing process more efficient? How can our customers work their way through thousands of documents, quickly identifying the relevant ones? In this talk, Data Scientist Björn Heinen will elaborate on his past and current projects in the manufacturing industry, explaining how tangible customer problems have been solved using classical machine learning algorithms, computer vision and more.

Target Audience: The session is intended for those working in the machinery and plant engineering industry, or in other sectors, who would like to optimize their processes with the help of AI and use business data successfully in the long term.
Prerequisites: No prerequisites needed.
Level: Basic

Björn Heinen is a Senior Data Scientist at INFORM. He is involved both in internal projects, in which existing INFORM products are enhanced with machine learning functionality, and in external projects, which he accompanies from development to implementation.
Björn Heinen
Track: IoT
Session: Mi 3.5

16:50 - 17:00
Short Break
