PROGRAM

The times given in the conference program of TDWI München digital 2021 are in Central European Time (CET).

By clicking "VORTRAG MERKEN" within a session description, you can put together your own schedule. You can view it at any time via the icon in the upper right corner.

For anyone who prefers an alternative view, we offer the program as a PDF:
» PDF download

You are welcome to share the conference program with your colleagues and/or via social media.

Track: Analyst Track

10:10 - 10:50
Mo 3.1
The need for a unified big data architecture

It is not big data that is the biggest change in the IT industry, but how data is used. To become more data-driven and to succeed with their digital transformation, organizations are using their data more extensively to improve their business and decision processes. Unfortunately, it is hard for current data delivery systems to support new styles of data usage, such as data science, real-time data streaming analytics, and managing heavy data ingestion loads. This session discusses a real big-data-ready data architecture.

Target Audience:
Data architects, data warehouse designers, IT architects, analysts, enterprise architects, solutions architects
Prerequisites: Some understanding of data architectures
Level: Advanced

Extended Abstract:

It is not big data that is the biggest change in the IT industry; it is data usage that has changed most drastically over the last ten years. To become more data-driven and to succeed with their digital transformation, organizations are using their data more extensively to improve their business and decision processes. This may mean that more data needs to be collected, resulting in big data systems, but the goal remains to do more with data. Unfortunately, current data delivery systems don't have the right characteristics to support the requirements of big data systems, such as supporting data science, handling massive data streaming workloads, and managing heavy data ingestion loads. Additionally, many organizations that are developing big data systems are building isolated data delivery systems: an isolated data lake, an isolated data streaming system, an isolated data services system, and so on. This avalanche of data delivery systems is not beneficial to the organization; the wheel is reinvented over and over again. It's time to design real big-data-ready data architectures. This session discusses such an architecture, including the supporting technologies and their pros and cons.

Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.

Rick van der Lans

11:20 - 12:30
Mo 3.2
Data virtualization in real life projects

Data virtualization is being adopted by more and more organizations for different use cases, such as 360-degree customer views, logical data warehouses, democratizing data, and self-service BI. As a result, knowledge is now available on how to use this agile data integration technology effectively and efficiently. This session discusses lessons learned, tips and tricks, do's and don'ts, and guidelines, along with the biggest pitfalls. In short, it shares the expertise gathered from numerous projects.

Target Audience: Data architects, data warehouse designers, IT architects, analysts, solutions architects
Prerequisites: Some understanding of database technology
Level: Advanced

Extended Abstract:
Data virtualization is being adopted by more and more organizations for different use cases, such as 360-degree customer views, logical data warehouses, and self-service BI. As a result, knowledge is now available on how to use this agile data integration technology properly. This practical session discusses the lessons learned: tips and tricks, do's and don'ts, and guidelines. In addition, it answers questions such as: How does data virtualization work and what are the latest developments? What are the reasons to choose data virtualization? How do you make the right choices in architecture and technology? How does data virtualization fit into your application landscape? Where and how do you start with the implementation? And what are the biggest pitfalls in implementation? In short, the expertise gathered from numerous projects is discussed.
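
To make the concept concrete, here is a minimal Python sketch of what a data virtualization layer does at its core: it exposes a single virtual view that federates heterogeneous sources at query time, without copying data into a central store. The sources, names, and sample data are invented for illustration and do not reflect any particular product.

```python
import sqlite3

# Source 1: an operational database (modelled here as in-memory SQLite).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 120.0), (1, 80.0), (2, 35.5)])

# Source 2: a CRM system reached via an API (modelled here as a plain dict).
crm = {1: {"name": "Acme GmbH", "segment": "enterprise"},
       2: {"name": "Beta AG", "segment": "smb"}}

def customer_360(customer_id):
    """A 'virtual view': both sources are combined at query time, on demand."""
    (total,) = db.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = ?",
        (customer_id,)).fetchone()
    return {"customer_id": customer_id, **crm.get(customer_id, {}),
            "order_total": total}

print(customer_360(1))  # combines the CRM profile and order total in one answer
```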

Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.

Rick van der Lans

14:00 - 15:30
Mo 3.3
DataOps: using data fabric and a data catalog for continuous development of data assets

As the data landscape becomes more complex with many new data sources, and data spread across data centres, cloud storage and multiple types of data store, the challenge of governing and integrating data gets progressively harder. The question is what can we do about it? This session looks at data fabric and data catalogs and how you can use them to build trusted re-usable data assets across a distributed data landscape.

Target Audience: CDO, Data Engineer, CIO, Data Scientist, Data Architect, Enterprise Architect
Prerequisites: Understanding of data integration software and development using a CI/CD DevOps approach
Level: Advanced

Extended Abstract:

  • The increasingly complex data landscape
  • Challenges in governing and integrating data in a distributed multi-cloud environment
  • What is data fabric, why is it important, and what technologies are on the market?
  • What is a data catalog and what products are on the market?
  • Why do organizations need these technologies?
  • How do data catalogs and data fabric work together?
  • A closer look at data fabric capabilities
  • Defining common data entities in a catalog business glossary
  • Using a catalog for automatic data discovery (see the sketch after this list)
  • A DataOps CI/CD approach to building reusable trusted data assets using data fabric
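
As an illustration of two of the points above, the business glossary and automatic data discovery, here is a hypothetical Python sketch. The GlossaryTerm class, the terms, and the name-matching logic are invented for illustration and are far simpler than what a real data catalog product provides.

```python
from dataclasses import dataclass, field

@dataclass
class GlossaryTerm:
    """A common data entity in the catalog's business glossary."""
    name: str
    definition: str
    synonyms: set = field(default_factory=set)

glossary = [
    GlossaryTerm("customer_id", "Unique identifier of a customer", {"cust_id", "customerid"}),
    GlossaryTerm("order_amount", "Gross order value in EUR", {"amount", "order_value"}),
]

def discover(dataset_name, rows):
    """Profile a data set and map each column to a glossary term where possible."""
    entry = {"dataset": dataset_name, "columns": {}}
    for col in (rows[0].keys() if rows else []):
        values = [row[col] for row in rows]
        term = next((t.name for t in glossary
                     if col == t.name or col in t.synonyms), None)
        entry["columns"][col] = {
            "type": type(values[0]).__name__,
            "nulls": sum(v is None for v in values),
            "glossary_term": term,  # None: needs manual curation by a data steward
        }
    return entry

catalog = [discover("sales.orders",
                    [{"cust_id": 1, "amount": 120.0},
                     {"cust_id": 2, "amount": None}])]
print(catalog[0]["columns"])
```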

Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant, with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels on data management and analytics. He teaches, consults and presents around the globe.

Mike Ferguson

16:00 - 17:10
Mo 3.5
No more metadata, let's talk context and meaning

Metadata is a long-time favourite at conferences. However, most attempts to implement real metadata solutions have stumbled. Recent data quality and integrity issues, particularly in data lakes, have brought the topic to the fore again, often under the guise of data catalogues. But the focus remains largely technological. Only by reframing metadata as context-setting information and positioning it in a model of information, knowledge, and meaning for the business can we successfully implement the topic formerly known as metadata.

Target Audience:
- Enterprise-, Systems-, Solutions and Data Architects
- Systems-, Strategy and Business Intelligence Managers
- Data Warehouse and Data Lake Systems Designers and Developers
- Data and Database Administrators
- Tech-Savvy Business Analysts

Prerequisites: Basic knowledge of data management principles. Experience of designing data warehouses or lakes would be useful

Level: Basic

Extended Abstract:

Metadata has – quite correctly – been at the heart of data warehouse thinking for three decades. It's the sort of cross-functional and overarching topic that excites data architects and data management professionals. Why then can so few businesses claim success in implementing it? The real question is: implementing what?

This session answers that question by first refining our understanding of metadata, which has been made overly technical over the years and more recently been misappropriated by surveillance-driven organisations. With this new understanding, we can reposition metadata not as a standalone topic but as a vital and contiguous subset of the business information that is central to digital transformation. With this positioning, 'metadata implementation' will become a successful – but largely invisible – component of information delivery projects.
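
As a purely hypothetical illustration of that repositioning, the fragment below contrasts traditional technical metadata with the same data set enriched by context-setting information; all field names and values are invented.

```python
# Traditional technical metadata: accurate, but meaningless to most business users.
technical_metadata = {
    "table": "t_sls_ord_fct",
    "columns": {"amt": "DECIMAL(12,2)", "ord_dt": "DATE"},
    "source": "erp_extract_v7",
}

# The same data set with context-setting information layered on top.
context_and_meaning = {
    **technical_metadata,
    "business_name": "Sales orders (gross)",
    "meaning": "One row per confirmed customer order, before returns",
    "owner": "Sales operations",
    "trust": "Certified for financial reporting up to month-end close",
}

print(context_and_meaning["business_name"], "-", context_and_meaning["meaning"])
```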

Dr. Barry Devlin is a founder of data warehousing, defining its first architecture in 1985. He is respected worldwide as a visionary on BI and big data, publishing the seminal book 'Business unIntelligence' in 2013. With 30+ years' IT experience in IBM and 9sight, he offers thought-leadership and consulting to BI and big data buyers and vendors. He speaks and writes on all aspects of information.
Barry Devlin

17:30 - 18:10
Mo 3.6
Der Mörder ist immer der Gartner

A very serious yet humorous look at the fashion waves of IT, their impact on a company's information products, and a methodical way of dealing with the hype cycle.

The murderer is always the Gartner, because Gartner determines when a hype is over and thereby kills the method. A typical statement from non-experts with or without budget responsibility: 'Why are we still doing this? Gartner says it's no longer current.' Never mind that the underlying method dates from the 1970s...

Target Audience: Data Engineer, Data Scientist, Project Manager, Decision Maker, Data Architect
Prerequisites: Basic knowledge of company IT and data management
Level: Advanced

Extended Abstract:

The current focus is on tools rather than methods. Tools are meant to ensure that work is done only in one particular way; at least, that is the hope. Information products consist of three components: information, processes, and people. It is time to look beyond data and tools and to bring the methods behind them into focus as well, thereby enabling the people involved to do the right thing. Then it becomes possible to learn from mistakes and to find and maintain the right adaptation for one's own current situation.

Michael Müller has been working in business intelligence and data warehousing since 2001. As a board member of the German-speaking Data Vault User Group (DDVUG), he promotes the adoption of Data Vault. His work focuses on the interface with the business, on architecture and data modeling, and on automating and accelerating development in the data warehouse.

He specializes in data/information modeling, metadata and model-driven automation, and temporal aspects of information systems. Among other credentials, he is certified as a Data Vault 2.0 Practitioner and Anchor Modeler (version 2014). He gives talks, training courses, and workshops on Data Vault and data warehouse automation. Under the umbrella term 'Next Generation Data Warehousing', both topics will shape the data warehouse landscape in Germany for years to come. A data warehouse is a journey, not a destination, and nothing is more constant than change in this endeavor. Making this journey a success is becoming ever more important for many companies. As a data warehouse consultant and trainer, Oliver Cramer accompanies companies on this path.
Michael Müller, Oliver Cramer

09:00 - 10:30
Mi 4.1
Is there life beyond Hadoop – should data lakes be drained?

The past year or two has seen major changes in vendor support for the extended Hadoop ecosystem, with withdrawals, collapses, and mergers, as well as de-emphasis of Hadoop in marketing. Some analysts have even declared Hadoop dead. The reality is more subtle, as this session shows, through an exploration of Hadoop's strengths and weaknesses, history, current status and prospects. Discussion topics include plans for initiating new Hadoop projects and what to do if you have already invested, successfully or otherwise, in data lakes.

Target Audience:
- Enterprise-, systems-, solutions- and data architects
- Systems-, strategy- and business intelligence managers
- Data warehouse and data lake systems designers and developers
- Data and database administrators

Prerequisites: Basic knowledge of data management principles. Experience of designing data warehouses or lakes would be useful

Level: Basic

Extended Abstract:
Hadoop and its extended menagerie of related projects have been responsible for a decade of reinvention in the data management world. On the positive side, projects that would have been impossible with traditional technology have become not only possible but commonplace in today's environment. Hadoop has been at the heart of the explosion of digital business.

However, Hadoop has also been beset by a long list of complaints and problems, particularly relating to data management and systems management. Some of Hadoop's early strengths have been eroded as relational database vendors have upped their game. Furthermore, the rapid growth of cloud solutions for massive data delivery and processing projects has impacted Hadoop's commercial viability.

All these factors have led to a questioning of the future of the extended Hadoop ecosystem. This session answers these questions, not just from a technological viewpoint, but in the broader context of existing investments in skills and infrastructure, the direction of digital transformation, and the changing socio-political environment in which business operates.

Dr. Barry Devlin is a founder of data warehousing, defining its first architecture in 1985. He is respected worldwide as a visionary on BI and big data, publishing the seminal book 'Business unIntelligence' in 2013. With 30+ years' IT experience in IBM and 9sight, he offers thought-leadership and consulting to BI and big data buyers and vendors. He speaks and writes on all aspects of information.
Barry Devlin

11:00 - 12:10
Mi 4.3
Data science workbenches and machine learning automation – new technologies for agile data science

This session looks at how data science workbenches and machine learning automation tools can help business analysts to become data scientists and so meet the demand of business.

Target Audience: CDO, Head of Analytics, Data Scientist, Business Analysts, CIO
Prerequisites: Basic understanding of Data Science
Level: Advanced

Extended Abstract:
The demand for analytics is now almost everywhere in the business. Analytics are needed in sales, marketing and self-service, finance, risk, operations, supply chain and even HR. However, the current shortage of data scientists and the reliance on detailed skills such as programming have led many corporate executives to question current approaches to developing high-value analytical models and to ask whether development can be accelerated to improve agility and reduce time to value. This session looks at this problem in detail and at how emerging data science workbenches and machine learning automation tools can help reduce the reliance on highly skilled data scientists and allow business analysts to become data scientists, and so meet the demand of business.

  • The explosion in demand for analytics
  • Data science and the modern analytical ecosystem
  • Challenges with current approaches to analytics
  • Requirements to reduce time to value and accelerate development of analytical models
  • Improving productivity by integrating information catalogs and data science workbenches, e.g. Amazon SageMaker, Cloudera CDP Machine Learning, IBM Watson Studio, Microsoft Azure ML Service
  • Accelerating model development, monitoring and model refresh using ML automation tools, e.g. DataRobot, SAS, Dataiku Data Science Studio, Big Squid (a minimal sketch of the core idea follows this list)
  • Facilitating rapid analytics deployment via analytics as a service to maximise effectiveness and competitive edge
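
As referenced in the list above, here is a minimal Python sketch of the core idea behind ML automation: candidate models are trained and evaluated automatically, and the best one is selected. It assumes scikit-learn is installed; real products add automated feature engineering, monitoring, and model refresh on top.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate models an automation tool might try; each is evaluated identically.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# "Automation" in a nutshell: score every candidate and keep the winner.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"selected: {best} (CV accuracy {scores[best]:.3f})")
```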

Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant, with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels on data management and analytics. He teaches, consults and presents around the globe.

Mike Ferguson

14:30 - 18:10
Mi 4.4
Best practices in DataOps for analytics

Analytics teams are struggling to create, publish, and maintain analytics to meet demand, and many analytics projects fail to meet expectations and deliver value. DataOps is a new approach that combines tools and practices to simplify the development of analytics and ensure high-quality data. DataOps shortens life cycles, reduces technical debt, and increases analytics success. This session covers best practices for analytics teams delivering DataOps.

Target Audience: Data scientists, project leaders, analysts, data engineers, project sponsors, business leaders
Prerequisites: None
Level: Professional

Extended Abstract:
DataOps is the next generation of analytics delivery: it addresses the major issues of analytics technical debt, code reuse, and data quality, and incorporates the flexibility of agile. This session defines what DataOps is (it is not DevOps) and highlights best practices for addressing some of the largest challenges that prevent analytics teams from delivering value. The session outlines how DataOps is used to inspire teamwork, improve data quality, incorporate agile delivery, and establish an environment for reuse and innovation.
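
One of those best practices, data quality checks run as automated tests in a CI/CD pipeline, can be sketched in a few lines of Python. The rules and sample data below are invented for illustration; in practice, teams encode their own expectations, often with a dedicated framework such as Great Expectations.

```python
def check_orders(rows):
    """Return a list of data quality violations; an empty list means 'pass'."""
    errors = []
    if not rows:
        errors.append("no rows ingested")
    for i, row in enumerate(rows):
        if row.get("customer_id") is None:
            errors.append(f"row {i}: missing customer_id")
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            errors.append(f"row {i}: invalid amount {amount!r}")
    return errors

# In a CI/CD pipeline this would run against a fresh extract; a non-empty
# result would fail the build, blocking deployment of bad data.
sample = [{"customer_id": 1, "amount": 120.0},
          {"customer_id": None, "amount": -5}]
print("\n".join(check_orders(sample)) or "all checks passed")
```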

With over 20 years of experience, Dr. Larson is an active practitioner and academic focusing on BI, data warehousing, analytics, and AI. Dr. Larson completed her doctorate in management in information technology leadership. She holds Project Management Professional (PMP) and Certified Business Intelligence Professional (CBIP) certifications. Dr. Larson attended AT&T Executive Training at Harvard Business School in 2001, focusing on IT leadership. She is a regular contributor to TDWI publications and presents several times a year at conferences. Dr. Larson is principal faculty at City University of Seattle in the United States.
Deanne Larson
