TDWI@Home

You can't make it to Munich but still want to be part of the TDWI Conference 2022? Then use our TDWI@Home ticket. We look forward to seeing you again!
 
» Get Your Ticket Now

Montag, 20. Juni 2022 (Monday, 20 June 2022)
09:30 - 10:30
KeyMo1
OPENING and KEYNOTE: Welcome to the Real World: Data, Science and Supply Chain network optimization at Amazon

Have you ever ordered a product on Amazon websites and, when the box arrived, wondered how you got it so fast, how much it would have cost Amazon, how much carbon it emitted and what kinds of systems & processes must be running behind the scenes to power the whole operation?

Let's take a look behind the scenes and open the doors of Amazon's data and analytics teams to explore how our people and our advanced algorithms work together to deliver millions of diverse products to our customers every day, all across the globe.

From the moment we order products from our vendor partners, until we deliver to our customer doorsteps, we use dozens of systems, pieces of software, Machine Learning algorithms and Petabytes of data to optimize our operations. Together, they orchestrate what we call our fulfillment network and optimize the reliability, delivery speed, cost and carbon emissions of our products, packages and trucks. 

Together, we will follow the journey of a customer order, and dive into the different steps of our fulfillment operations:

  1. Forecasting how much volume will flow on our network
  2. Buying the right quantities and placing our inventory at the right location
  3. Designing an efficient outbound network
  4. Executing operational excellence to delight customers

We strive to be Earth's most customer-centric company; today, let's take a look at what that means!
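To make step 1 of this journey a bit more tangible, here is a minimal, purely illustrative sketch of volume forecasting using simple exponential smoothing. The region names and order volumes are invented, and Amazon's actual forecasting systems are of course far more sophisticated; this only shows the shape of a one-step-ahead baseline forecast in Python.

```python
# Illustrative only: a toy baseline for "step 1" (volume forecasting), not Amazon's method.
# It forecasts next week's order volume per region with simple exponential smoothing.

from typing import Dict, List

def exponential_smoothing_forecast(history: List[float], alpha: float = 0.3) -> float:
    """Return a one-step-ahead forecast from a series of weekly order volumes."""
    level = history[0]
    for observed in history[1:]:
        level = alpha * observed + (1 - alpha) * level  # blend each new observation into the level
    return level

# Hypothetical weekly order volumes per fulfillment region.
weekly_orders: Dict[str, List[float]] = {
    "EU-Central": [120_000, 131_000, 128_000, 140_000, 152_000],
    "EU-West":    [ 98_000, 101_000,  97_000, 105_000, 110_000],
}

for region, history in weekly_orders.items():
    forecast = exponential_smoothing_forecast(history)
    print(f"{region}: forecast for next week ~ {forecast:,.0f} orders")
```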

Dominique Vitali is Director of the EU Customer Experience team at Amazon and in charge of supply chain and transportation network optimization through analytics for European customers: delivery accuracy, delivery speed, fulfillment cost reduction and carbon intensity reduction. She manages a team of 25 analysts, program managers and data scientists.

Dominique Vitali
Track: #Keynote

10:45 - 12:15
Mo 5.1
ROOM K4 | Operationalizing Machine Learning in the Enterprise

What does it take to operationalize machine learning and AI in an enterprise setting? It seems easy, but it is difficult: vendors say that you only need smart people, some tools, and some data. The reality is that going from the environment needed to build ML applications to a stable production environment in an enterprise is a long journey. This session describes the nature of ML and AI applications, explains important operations concepts, and offers advice for anyone trying to build and deploy such systems.

Target Audience: analytics manager, data scientist, data engineer, architect, IT operations
Prerequisites: Basic knowledge of data and analytics work
Level: Basic

Extended Abstract:
What does it take to operationalize machine learning and AI in an enterprise setting?
Machine learning in an enterprise setting seems easy, but it is difficult. You are told that all you need is some smart people, some tools, and some data. To travel from the environment needed to build ML applications to an environment for running them 24 hours a day in an enterprise is a long journey.
Most of what we know about production ML and AI comes from the world of web and digital startups and consumer services, where ML is a core part of the services they provide. These companies have fewer constraints than most enterprises do.
This session describes the nature of ML and AI applications and the overall environment they operate in, explains some important concepts about production operations, and offers some observations and advice for anyone trying to build and deploy such systems.
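As a concrete illustration of one of the operations concepts touched on here (serving a trained model reliably), the following sketch wraps a stand-in scikit-learn model behind an HTTP endpoint with a health check and input validation. It is not the speaker's reference architecture; FastAPI, pydantic and scikit-learn are assumptions made for the example, and a real enterprise deployment would add authentication, monitoring, model versioning, rollback and retraining pipelines.

```python
# A minimal sketch of serving a model as a service; run with e.g. "uvicorn <module>:app".

from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a stand-in model at startup; in production you would load a versioned artifact instead.
iris = load_iris()
model = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

app = FastAPI(title="toy-ml-service")

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.get("/health")
def health() -> dict:
    # Liveness probe for the orchestrator (e.g. Kubernetes).
    return {"status": "ok"}

@app.post("/predict")
def predict(features: Features) -> dict:
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    prediction = int(model.predict(row)[0])
    return {"class_index": prediction, "class_name": str(iris.target_names[prediction])}
```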

Mark Madsen is a Fellow in the Technology & Innovation Office at Teradata, where he works on the use of data and analytics to augment human decision-making and evolve organizational systems. Mark has worked for the past 25 years in the field of analytics and decision support, starting with AI at the University of Pittsburgh and robotics at Carnegie Mellon University. He is also on the faculty of TDWI.

Mark Madsen

13:45 - 15:00
Mo 4.2
ROOM K3 | Helping organizations to master the data challenge

Missing data leadership, lack of vision, data-illiterate business units, data in silos, no data and analytics governance: the symptoms of a missing data strategy are unmistakable. While organizations strive to exploit the benefits promised by data & analytics, well-thought-out corporate data strategies are the exception rather than the rule. We would like to exchange best practices and experiences for designing and implementing sustainable yet pragmatic data strategies for organizations.

Target Audience: Practitioners for data strategy consulting, (Data-) decision makers in organizations, Data leaders, BI & AI team leaders
Prerequisites: Experience and knowledge in the area of analytics, BI or AI; data use cases
Level: Basic

Extended Abstract:
Draft outline:

  • Overview elements of a data strategy
  • Typical initial situations in organizations
  • Toolkits and methodologies when designing data strategies
  • Exchange of experiences & best practices 

Jens is a seasoned Data Scientist and Strategist with more than 15 years of professional experience in generating business value from data using Analytics, Data Science & AI. He led many data projects with measurable success for renowned international clients. Today, he helps organizations to design and implement data strategies for their digital transformation journeys.

Boris and his team work passionately to drive the adoption of solutions and processes that enable people to make healthy, data-driven decisions. These approaches cover the entire data value chain, from raw data to sophisticated business intelligence applications and AI solutions based on modern data science.

Jens Linden, Boris Michel

13:45 - 15:00
Mo 5.2
ROOM K4 | The AI solution is the goal - ML engineering is how you get there

Artificial intelligence has long outgrown its pioneer days. But to create real added value for the company with AI, what matters is the provision of high-quality data. This is where ML engineering comes into play: a concept for mastering the high complexity of data when developing AI systems. The talk presents an ML engineering roadmap for successfully applying this often underestimated yet critical concept.

Target Audience: Data engineers, data scientists, entrepreneurs with a practical interest in AI
Prerequisites: Interest in AI and ML topics, basic to advanced knowledge of data science and/or data engineering
Level: Advanced

Lars Nielsch is a Principal Solution Architect Analytics & Cloud at Adastra. After studying applied computer science at TU Dresden, he has worked in BIA consulting since 1998. His particular interests are enterprise BI, large databases, data engineering (ETL design), data science (MLOps) and big data architectures (Data Vault, data lake, streaming).

ROOM K4 | One Size Does Not Fit All: Make The Right Data Mesh For You

As the data mesh paradigm takes the industry by storm, the conversation dives deep into the architecture and neglects the socio-organizational element. Data-driven organizations must invest not only in infrastructure but also in data organization and culture.

Target Audience: Executive, senior business managers
Prerequisites: None
Level: Basic

Jennifer Belissent joined Snowflake as Principal Data Strategist in early 2021, having most recently spent 12 years at Forrester Research as an internationally recognized expert in establishing data and analytics organizations and exploiting data's potential value. Jennifer is widely published and a frequent speaker. Previously, Jennifer held management positions in the Silicon Valley, designed urban policy programs in Eastern Europe and Russia, and taught math as a Peace Corps volunteer in Central Africa. Jennifer earned a Ph.D. and an M.A. in political science from Stanford University and a B.A. in econometrics from the University of Virginia. She currently lives in the French Alps, and is an avid alpinist and intrepid world traveler.

Lars Nielsch, Jennifer Belissent

15:30 - 16:45
Mo 5.3
ROOM K4 | Data Management 4 AI - TDWI Community Talk incl. Panel

The real magic of AI lies in well-managed data to build and train the underlying models. Accordingly, streamlined data management processes are essential for success in AI. In this session we are going to discuss data management for AI and ask questions like 'What is data management for AI?', 'Are there differences to well-known approaches from BI & analytics?' and 'Do we need special AI data engineers?'.
TDWI Community Talk is an open format to discuss current topics in the area of data analytics within the TDWI community.

Target Audience: All data enthusiasts
Prerequisites: No prerequisites
Level: Basic

Extended Abstract:
The area of artificial intelligence is currently trending and transforms BIA landscapes in many organizations. There are many new initiatives and promises; however, to build all these fancy applications, well-thought-out data management is necessary. Nevertheless, the discussion of AI often focuses on various models and cool programming languages, and the underlying data engineering is often neglected. This is why this session focuses on data management for AI and discusses approaches and best practices with the TDWI community.

The goals of this session are to:

  1. Give the audience an overview of what 'Data Management for AI' means and introduce the basic terms.
  2. Discuss current best practices and challenges with experts and the audience.
  3. Reflect different views on the differences between processes in AI and BI, the role of the data engineer, software tools and more.

The 'TDWI Data Schnack' is an interactive format that aims to encourage discussion in the TDWI community. It provides a platform that highlights different aspects of a current topic and inspires discussions between experts and other community members. A Data Schnack session therefore starts with a short introductory talk that establishes a basic understanding of the topic, followed by a panel discussion with experts from different fields. Lastly, an open discussion brings in the audience to share knowledge among all participants.

Julian Ereth is a researcher and practitioner in the area of business intelligence and analytics. As a solution architect at Pragmatic Apps he plans and builds analytical landscapes and custom software solutions. He is also engaged with the TDWI and hosts the TDWI StackTalk.

Timo Klerx is founder and data scientist of and at paiqo and helps customers conceive and implement projects in artificial intelligence, data science and machine learning.
His first contact with AI came in a research project on automatic manipulation detection for ATMs.
Before founding his own startup, he gained experience at another data science startup, focusing on machine analytics with use cases such as predictive maintenance and predictive quality.
Timo is also active in various data science meetups in Paderborn, Münster and across NRW.

Malte Lange is product owner for data analytics at Finanz Informatik, the central digitalization partner of the Sparkassen-Finanzgruppe.
Creating data-driven banking solutions has been his focus since 2019. Among other things, he is responsible for the omnichannel-capable customer outreach 'Next Best Action' for the digital finance platform OSPlus and drives the further development of the central data analytics platform for analytical use cases in OSPlus. Together with partners in the Sparkassen-Finanzgruppe he develops new data-driven solution approaches for Sparkassen in order to realize the potential of existing data.

Julian Ereth, Timo Klerx, Malte Lange

17:15 - 18:30
Mo 5.4
ROOM K4 | Explainable AI - Why interpretable models are good models

Machine learning and AI have changed the world of data processing and automation at a breathtaking pace, at the cost of turning algorithms into hard-to-control and monitor black boxes.
We present methods and concepts of explainable AI that aim to open the black box and tame these algorithms.
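As one small, generic example of what opening the black box can look like in practice (not necessarily the method presented in this session), the sketch below uses permutation importance from scikit-learn to see which features a black-box model actually relies on; the dataset and model are stand-ins chosen for the illustration.

```python
# Model-agnostic explainability sketch: shuffle each feature and measure the score drop.
# Libraries such as SHAP or LIME go further with per-prediction explanations.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permute each feature in turn on the held-out data and record the mean drop in test score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranking = sorted(zip(X.columns, result.importances_mean), key=lambda item: item[1], reverse=True)
for feature, importance in ranking[:5]:
    print(f"{feature}: mean score drop {importance:.3f}")
```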

Target Audience: Decision-makers/stakeholders in AI & model development, data scientists
Prerequisites: general awareness of modeling pipeline and challenges, no coding/math skill required
Level: Basic

Maximilian Nowottnick is a Data Scientist at the full-service data science provider Supper & Supper GmbH from Germany. He has a B.Sc. and an M.Sc. in physics and extensive knowledge in developing AI solutions in the areas of GeoAI and mechanical engineering. He was one of the driving engineers behind Supper & Supper's first SaaS solution, Pointly, for 3D point cloud classification.

ROOM K4 | Harness the power of language with NLP in the Cloud

Natural Language Processing (NLP) allows us to deeply understand and derive insights from language, ultimately leading to more automated processes, lower costs, and data-driven business decisions. 
Google is recognized as a market leader in AI and has built a range of solutions incorporating NLP to address a myriad of business challenges. This talk will introduce a few possible solutions, as well as some business use cases on how to incorporate them in a variety of industries.
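As one possible hands-on example of the kind of solution discussed here, the sketch below calls the Google Cloud Natural Language API for sentiment and entity analysis. It assumes the google-cloud-language Python client is installed and application credentials are configured; the sample text and function name are invented for the illustration.

```python
# Minimal sketch: sentiment and entity analysis with the Cloud Natural Language API.

from google.cloud import language_v1

def analyze_feedback(text: str) -> None:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)

    # Overall sentiment of the text (score: negative..positive, magnitude: strength).
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    print(f"sentiment score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")

    # Entities mentioned in the text, e.g. products, organizations, locations.
    entities = client.analyze_entities(request={"document": document}).entities
    for entity in entities:
        print(f"entity: {entity.name} ({language_v1.Entity.Type(entity.type_).name})")

analyze_feedback("The new support portal is fantastic, but delivery to Munich took too long.")
```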

Target Audience: Middle and upper-level management, Business users with AI/machine learning challenges, BI/Data professionals
Prerequisites: Basic knowledge of machine learning and cloud technology, interest in NLP
Level: Intermediate

Catherine King is a Customer Engineer at Google Cloud and is a Google Cloud Certified Professional Data Engineer. She works with customers in the Public Sector and supports them in digital transformations, big data analytics, and artificial intelligence implementations. Before Google, she worked for many years in the Translation Industry designing Machine Translation models for enterprise clients.
Catherine holds an MSc in Data Science and is passionate about decision science and data-driven cultures.

Maximilian Nowottnick, Catherine King

Dienstag, 21. Juni 2022 (Tuesday, 21 June 2022)
09:00 - 10:15
Di 3.1
ROOM K3 | Data Architecture: Data Lake vs Lakehouse vs Data Mesh

In order to succeed in creating a data driven enterprise it is clear that choosing the right data architecture is now critical. This session explores the evolution of data and analytics architecture and looks at what is needed to shorten time to value and create a data driven enterprise. It looks at the pros and cons of data lake, lakehouse and data mesh architectures and asks: Is there a best approach? Is a lot more than this needed to succeed?

Target Audience: Data architects, CDOs, CAOs, enterprise architects, data scientists, business analysts
Prerequisites: Basic understanding of data architectures used in supporting analytical workloads
Level: Advanced

Extended Abstract:
In many companies today the desire to become data driven goes all the way to the boardroom. The expectation is that as more and more data enters the enterprise, it should be possible to understand and use it to quickly and easily drive business value. In order to succeed in creating a data driven enterprise it is clear that choosing the right data architecture is now critical. However, data and analytics architecture has been evolving over recent years to a point where now there are multiple options. Is it a data lake that is needed? Is it a lakehouse? Or is it a data mesh? Should this be the focus or is it just vendor hype to fuel their own interests?  What are the pros and cons of these options? Is there a best approach? Is a lot more than this needed to succeed? This session explores the evolution of data and analytics architecture and looks at what is needed to shorten time to value and create a data driven enterprise.

  • Data and analytics - where are we?
  • Data and analytics architecture evolution
  • Architecture options and their pros and cons - data lake vs lakehouse vs data mesh
  • The shift to data fabric, DataOps, and MLOps to industrialise pipeline development and model deployment
  • Using a data and analytics marketplace to put analytics to work across the enterprise

 

Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels. He teaches, consults and presents around the globe.

Mike Ferguson

10:45 - 12:00
Di 3.2
ROOM K3 | Data Lakehouse: Marketing Hype or New Architecture?

The data lakehouse is the new popular data architecture. In a nutshell, the data lakehouse is a combination of a data warehouse and a data lake. It makes a lot of sense to combine them, because they are sharing the same data and similar logic. This session discusses all aspects of data warehouses and data lakes, including data quality, data governance, auditability, performance, historic data, and data integration, to determine if the data lakehouse is a marketing hype or whether this is really a valuable and realistic new data architecture.

Target Audience: Data architects, enterprise architects, solutions architects, IT architects, data warehouse designers, analysts, chief data officers, technology planners, IT consultants, IT strategists
Prerequisites: General knowledge of databases, data warehousing and BI
Level: Basic

Extended Abstract:
The data lakehouse is the new kid on the block in the world of data architectures. In a nutshell, the data lakehouse is a combination of a data warehouse and a data lake. In other words, this architecture is developed to support a typical data warehouse workload plus a data lake workload. It holds structured, semi-structured and unstructured data. Technically, in a data lakehouse the data is stored in files that can be accessed by any type of tool and database server. The data is not kept hostage by a specific database server. SQL engines are able to access that data efficiently for more traditional business intelligence workloads, and data scientists can create their descriptive and prescriptive models directly on the data.

It makes a lot of sense to combine these two worlds, because they are sharing the same data and they are sharing logic. But is this really possible? Or is this all too good to be true? This session discusses all aspects of data warehouses and data lakes, including data quality, data governance, auditability, performance, immutability, historic data, and data integration, to determine if the data lakehouse is a marketing hype or whether this is really a valuable and realistic new data architecture.
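A minimal sketch of this idea, that lakehouse data lives in open files which several engines can share: the same (hypothetical) Parquet directory is queried once with Spark SQL for a BI-style workload and read once into pandas for a data-science workflow. PySpark and pandas are assumptions made for the example, and a production lakehouse would add a table format such as Delta Lake, Apache Iceberg or Apache Hudi for transactions, schema evolution and time travel.

```python
# The same open files serve both a SQL/BI-style query and a data-science workflow.

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

path = "/data/lake/sales"  # hypothetical Parquet directory; column names are invented too

# BI-style access: a SQL engine queries the files directly.
spark.read.parquet(path).createOrReplaceTempView("sales")
revenue_per_country = spark.sql(
    "SELECT country, SUM(revenue) AS revenue FROM sales GROUP BY country ORDER BY revenue DESC"
)
revenue_per_country.show()

# Data-science access: the very same files feed a pandas workflow for modeling.
sales_df: pd.DataFrame = pd.read_parquet(path)
print(sales_df.describe())
```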

Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.

Rick van der Lans

12:15 - 13:00
KeyDi
KEYNOTE: The information enabled company – a long way to digital transformation

Digital transformation is in a way a never-ending journey. Recent trends put high expectations on AI technologies for process automation, insight generation and decision support. At Hilti, we see the information generated from our data as a substantial contribution to the success of our business.
We will describe how we put the user and the usage of information in the center of our initiative of an information enabled company. Hilti’s journey towards process, data and system consolidation serves as an excellent foundation for that. We present the foundational technologies we put in place to manage the increasing amount and variety of data, as well as our “Integrated Information Management” approach. We will especially focus on advanced analytics and AI and give examples for successful implementations, but also highlight challenges, especially when it comes to change management and taking the organization along.

In his function as Head of Information Management in Global IT, Ralf Diekmann is responsible for all reporting, data engineering, and analytics solutions of Hilti AG globally. Ralf holds a PhD in computer science from the Paderborn Center for Parallel Computing. He joined Hilti AG 22 years ago as a research engineer and has since held various positions in business and IT, including global process responsibility, SAP implementation manager, Head of Process Governance, and various leadership functions in Hilti's Global IT department.

Andreas Wagner leads the Data Science team at Hilti. In this role he delivers DS projects, shapes the DS strategy at Hilti, recruits data scientists and further develops the necessary ML toolbox. Andreas has more than five years of experience in this field and has been at Hilti for nine years. He holds a PhD in theoretical physics.

Ralf Diekmann, Andreas Wagner
Track: #Keynote

13:15 - 13:45
TDWInsights Di
Interview with Brian O'Neill

At TDWI München, Brian will hold a workshop on "Designing Human-Centered Data Products". In this interview, he provides insights into what participants can expect from the workshop.

Brian T. O'Neill helps data product leaders use design to create indispensable ML and analytics solutions. In addition to helping launch several successful startups, he's brought design-driven innovation to DellEMC, Tripadvisor, JP Morgan Chase, NetApp, Roche, Abbvie, and others. Brian also hosts the Experiencing Data podcast, advises at MIT Sandbox, and performs as a professional percussionist.

Data Management 4 AI

Henning Baars and Julian Ereth talk about their new seminar "Data Management 4 AI". Why is the topic so relevant right now? What content do the two cover in their seminar? And how does a TDWI seminar actually come about? A look behind the scenes of the TDWI seminars.

Julian Ereth is a researcher and practitioner in the area of business intelligence and analytics. As a solution architect at Pragmatic Apps he plans and builds analytical landscapes and custom software solutions. He is also engaged with the TDWI and hosts the TDWI StackTalk.

Dr. rer. pol. Henning Baars is a senior lecturer (Akademischer Oberrat) at the Chair of Business Administration and Information Systems 1 at the University of Stuttgart and spokesperson of the "Business Intelligence" special interest group of the Gesellschaft für Informatik. He has been at the University of Stuttgart since 2003. His current research topics are agile business intelligence, BI and big data, BI in the cloud, and BI and analytics in the Internet of Things.
Brian O'Neill
Vortrag: TDWInsights Di1

Julian Ereth, Henning Baars
Vortrag: TDWInsights Di2

14:30 - 16:00
Di 3.3
ROOM K3 | How to Design a Logical Data Fabric?

A popular new architecture for offering frictionless access to data is the data fabric. With a data fabric, existing transactional and data delivery systems are wrapped (encapsulated) to make all of them look like one integrated system. A data fabric enables all data consumers to access and manipulate data. Technically, data is accessed and used through services. But data fabrics cannot be bought; they need to be designed and developed. This session discusses key guidelines for designing data fabrics.

Target Audience: Data architects, enterprise architects, solutions architects, IT architects, data warehouse designers, analysts, chief data officers, technology planners, IT consultants, IT strategists
Prerequisites: General knowledge of databases, data warehousing and BI
Level: Advanced

Extended Abstract:
Companies are becoming increasingly dependent on data. Having access to the right data at the right time is essential. This implies that users need frictionless access to all the data, wherever it is stored, in a transactional database, a data warehouse, or a data lake. It does not matter to users where data comes from as long as it meets all their requirements. Users do not want to be hindered by all the data delivery silos. They want one system that gives them access to all the data they need.

The solution to provide frictionless access cannot be data warehouse-like, wherein all the data is copied (again) to one big central database. In this second era of data integration, integration must be achieved without copying. A new solution must be based on a single universal entry point to access all data. Where and how the data is stored, whether it is stored in various databases, must be completely hidden from data users.

A popular new architecture that supports this approach is data fabric. With a data fabric, existing transactional and data delivery systems are wrapped (encapsulated) to make all the independent systems look like one integrated system.  

A data fabric is formed by a software layer that resides on top of all the existing transactional silos and data delivery silos, enabling all data consumers to access and manipulate data. Technically, data is accessed and used through services.  

A real data fabric supports any type of service, whether this is a more transactional or an analytical service. Especially the second group of services is complex to develop. Analytical services based on predefined queries may not be that complex to develop, but how do you develop services that need to deal with ad-hoc queries?

This session explains the need for data fabrics that support all types of services and discusses key guidelines for designing data fabrics. Technologies are discussed that help with developing such services.

  • What a data fabric is, and why you need one
  • How you can architect a service-centric fabric to gain flexibility and agility
  • The data management and integration capabilities that are most relevant
  • Where to start your journey to data fabric success
  • What a logical data fabric is
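To make the "data is accessed and used through services" idea more concrete, here is a toy sketch (not a design prescription from the session) of a small data service that hides where and how the data is stored behind an endpoint. FastAPI and the SQLite source, table and field names are all invented for the illustration; a real fabric service would sit on top of virtualization, catalog and security layers.

```python
# A service wraps an underlying store so consumers see an endpoint, not the silo behind it.

import sqlite3
from fastapi import FastAPI

app = FastAPI(title="customer-data-service")
SOURCE = "customers.db"  # hypothetical silo; could equally be a warehouse, lake or operational system

@app.get("/customers/{customer_id}")
def get_customer(customer_id: int) -> dict:
    # The consumer never learns where or how the data is stored; only the service does.
    connection = sqlite3.connect(SOURCE)
    try:
        row = connection.execute(
            "SELECT id, name, country FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
    finally:
        connection.close()
    if row is None:
        return {"found": False}
    return {"found": True, "id": row[0], "name": row[1], "country": row[2]}
```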

 

Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.

Rick van der Lans

16:30 - 18:00
Di 3.4
ROOM K3 | Transition towards a collaborative Data Mesh cloud platform

SWICA historically runs a data warehouse built by a centralized team and, in parallel, multiple isolated solutions for domain-specific analyses, which require high maintenance and extensive effort to stay compliant.

Modernizing our analytical environment, we are building a collaborative platform on MS Azure, utilizing the Data Mesh paradigms of data domain and data product.

We aim to deliver a managed data marketplace for all data domains to provide their data products on a modern platform with low maintenance and built-in security & compliance.

Target Audience: Data Analysts, Data Engineers, Project Leaders, Decision Makers
Prerequisites: Basic understanding of the data mesh concept, data warehouse architectures and the challenges of diverse analytical use cases from multiple lines of business
Level: Advanced
 

15 years of BI industry experience as a project manager, analyst, team lead and solution architect. Closely following new concepts and technologies, aiming for practical application in the enterprise world.

Having built planning and reporting solutions for small and medium-sized enterprises for more than 15 years, the opportunity to build a modern cloud-based data platform for SWICA, the leading health insurance company in Switzerland, is a challenge that develops my personality and skills. An extra treat is the use of the latest cloud technologies and a high degree of flexibility in building the solution.

Tobias Rist, Philipp Frenzel

Mittwoch, 22. Juni 2022 (Wednesday, 22 June 2022)
09:00 - 10:30
Mi 2.1
ROOM K4 | Transforming Retail with Cloud Analytics - Petrol Case Study

Petrol is a Slovenian company that operates in 8 countries in South-East Europe with EUR 5 billion in annual revenue. As a traditional publicly owned company, Petrol has faced the necessity of transformation to stay ahead in a highly competitive market. The use of BIA was mainly reactive, but in recent years it has been turned into a competitive advantage by using cloud technologies and industry-specific analytical models and by focusing on content and creating business value. This value is now being leveraged as a competitive advantage through the proactive use of data and analytics.

Target Audience: Decision Makers, Data Architects, Project Managers 
Prerequisites: None 
Level: Basic 

Extended Abstract: 
Petrol is a Slovenian company that operates in 8 countries in South-East Europe with EUR 5 billion in annual revenue. Its main business activity is trading in oil derivatives, gas and other energy products, in which Petrol generates more than 80 percent of all sales revenue; it also has a leading market share in the Slovenian market. Petrol also trades in consumer goods and services, which generate just under 20 percent of the revenue. The use of BIA was mainly reactive, but in recent years it has been turned into a competitive advantage by using cloud technologies and industry-specific analytical models and by focusing on content and creating business value. This value is now being leveraged as a competitive advantage through the proactive use of data and analytics. The presentation will cover the main business challenges, explain the technology architecture and approach, and discuss results after three years of system development and use.

Andreja Stirn is Business Intelligence Director with more than 20 years of experience working in the Oil & Energy and Telco industry. Skilled in Data Warehousing, Business Intelligence, Corporate Performance Management, Market Research and People Management.

Dražen Oreščanin is a solution architect for a variety of DWH, BI and big data analytics applications, with more than 25 years of experience in projects at the largest companies in Europe, the US and the Middle East. He is the main architect of the PI industry-standard DWH models. Advised companies include operators from DTAG, A1 Group, Telenor Group, Ooredoo Group, Liberty Global, United Group, Elisa Finland and STC, and many companies in other industries such as FMCG and utilities.

Andreja Stirn, Dražen Oreščanin

14:00 - 15:15
Mi 3.3
ROOM K4 | Merging User Research with Data Analytics – how adding a customer centric view into the analytics advances insights driven data culture

Data analysts and data scientists invest an immense amount of time in optimizing models and interpreting data, all in the quest to promote better business decision-making and more efficient product development. However, we oftentimes fail to take a step back and answer the overarching questions: Why does the user show the observed behavior pattern? Why does a certain variable improve the accuracy of our prediction model? Despite all the advances we have made in analytics, even predictive analytics and ML models cannot truly answer what the user was thinking and why they act in a certain way.

Adding the customer perspective to the insights equation opens up a whole new perspective on this problem. As a consequence, XING merged the User Research and Analytics departments to create a more holistic approach to insights generation. This presentation walks through the problem statements, the differences between the professional fields (analytics and research), and how the individual segments of both disciplines complement each other and lead to a more user-centric decision-making organization.

Target Audience: anyone open to thinking outside of the regular patterns of analytics and AI/DS 
Prerequisites: none 
Level: Basic 
 

Marc Roulet is Director of Analytics, Research and SEO at XING, the leading business networking platform in Germany. In this role he supports the top management, business managers, product teams and marketing with insights to drive performance. This includes quantitative and qualitative user research, experimentation, forecasting, KPI definition, data visualization and analytical deep dives. A data evangelist at heart, Marc is dedicated to promoting a truly data driven mindset within the organization, breaking down complex data material into digestible and actionable insights for the business. Prior to his role at XING Marc worked in various leadership positions in the eBay Classifieds Group, at mobile.de and at ImmobilienScout24 in Business Development and as a Marketing and Sales Analyst. Marc started his career at eBay as a Business Analyst in the Trust and Safety Department, analyzing buyer and seller behavior and deriving seller standards.

Marc Roulet

15:30 - 16:15
KeyMi
KEYNOTE: VUCA-World on speed – keeping the promise of digitalization roadmaps in turbulent times

For many years, technology gurus, transformation evangelists and many more have pictured a world that will change dramatically at an incredible pace. Consequently, it was predicted that impacts on society, the economy, the environment and political landscapes would leave no stone unturned. As a matter of fact, the current times feel as if these predictions have finally become reality. The VUCA world is not only part of our daily life but even a nucleus in itself that demands resilience from individuals as well as from societies and organizations.
While climate change and pandemics seem to be part of the “new normal”, global conflicts get closer to the western world resulting in even more severe instabilities of supply chains, natural resource availabilities and much more - clearly stretching the long-held promise of a flourishing globalization. 
To avoid the "Uber yourself before you get Kodaked" pitfall, companies of any size find themselves coping with an environment that is certainly fiercer these days but at the same time allows for new opportunities that need to be discovered and unlocked. But what is the right strategy to capitalize on these opportunities if strategies themselves are not even worth the paper they are written on? How to keep pace with rapidly shortening technology lifecycles or tech innovations that do not seem to deliver on their promise? Is it even worth defining comprehensive roadmaps for digital strategies and transformations?
In his keynote, Thomas Kleine reflects on the value of defining digital roadmaps from a company perspective. He will incorporate not only his personal experiences but also refer to his employer's journey over the last two years, propelled to the frontline of fighting the COVID-19 disease. What are the key learnings, and what about the half-life of these learnings if tomorrow comes with a completely different set of challenges?

Since January 2017, Thomas Kleine has been CIO and Head of Digital at Pfizer Germany. He holds a Master of Business Administration (MBA) and studied at the Universities of Osnabrück, Augsburg and Pittsburgh, PA. After graduating in 2001, he initially spent 5 years at KPMG Consulting/BearingPoint as a senior consultant before moving to Coca-Cola Germany in 2006, where he held various management positions within IT.

Thomas Kleine
Track: #Keynote