Please note:
This page displays only the English-language sessions of TDWI München digital. You can find all conference sessions, including the German-language ones, here.
The times given in the conference program of TDWI München digital correspond to Central European Time (CET).
The work we do in the long-term vision labs and strategy team for the Technology & Innovation board area at SAP is one of the playgrounds for the title question. These topics are good because they generate exciting and abundant questions that, we believe, should be asked by everyone.
How does the future work, and what are the mechanics of change? What is the future of human work and of our interactions with machines and each other? Are the industrialization of space and 3D-printed organs really a “thing”, and how do you relate to them? What are the methods for innovating desirable futures? How do you structure such a vast and complex fabric of possibilities in your domain, team, business or boardroom? How long is the history of the future, and how does that affect our fears of and counter-reactions to it in socio-political movements? Most importantly, who designs the future that you and your community will live in?
We talk about change, but what are the mechanics and the dynamics behind it? How fast is it? What it means to be an innovator is transforming faster than before: from the classic, superficial definition of products and services to the computational system design of everything, including social and political systems, deeply rooted in a space of challenges and promises between cutting-edge technology and humanism. In an exponential and converging, digitally fueled future, we design a relationship, a behaviour, that the product will follow.
What comes before strategy? We love to act on strategies, derive tactics, execute and operate; it is our psychological bias, and we love to do what we are good at. But how do we derive a strategy? From what point of view, from what narratives, and what builds up the “future fabric” that these narratives are woven from? And who does this in your enterprise? With these questions in mind, we will take a high-level look at how we build a desirable future, and at what we in our labs consider a desirable future of work, at least for the coming decade.
This exponential change is our innovation brief, and the stakes are high: it is just too important to be left only to... any single team. Technology is human evolution; the previously named HuMachine creates a playground for a “human-centered” adventure, and this opens new worlds for our imagination in a time when “now” has never been so temporary. Bringing these thoughts together, we need to answer the question: “What is human, and what is work, in a superhuman future?”
Main chapters
Over the last few decades, ETL and especially data warehouse testing have been gaining quite a bit of traction. The reason for that traction? The modern enterprise's dependency on data. This dependency calls for the right testing strategy to ensure the quality and correctness of the provided data. Furthermore, the drive to lower 'time to data' and overall costs is putting high pressure on test specialists to increase efficiency in this area. In this presentation I want to show you our journey to daily regression testing in our enterprise data warehouse.
Target Audience: Testers, Test Managers, Project Leaders, Decision Makers, Data Engineers, Managers
Prerequisites: none
Level: Basic
Extended Abstract:
Over the last few decades, ETL and especially data warehouse testing have been gaining quite a bit of traction, particularly given the emergence of Agile and DevOps as top trends in the software development industry. The reason for that traction? The modern enterprise's dependency on data.
This dependency calls for the right testing strategy and execution to ensure the quality and correctness of the provided data.
Furthermore, the drive to lower 'time to data' and overall costs is putting high pressure on test specialists to increase efficiency in this area.
In this presentation I want to show you our journey to daily regression in our data warehouse.
We improved our processes and changed our testing approach. We started with a quarterly regression test that took a whole testing team of 10 people two months to finish.
Now we do the same on a daily basis within our agile team.
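The abstract does not name the team's tooling; as a purely illustrative sketch, a daily automated regression check over a data warehouse could be expressed as data reconciliation tests of the following kind (pytest, an in-memory SQLite stand-in and the table names are all assumptions, not the speaker's actual setup):

```python
# Minimal sketch of an automated DWH regression check, assuming a pytest setup
# and illustrative table names; the session's actual tooling is not specified.
import sqlite3

import pytest


@pytest.fixture()
def conn():
    # Stand-in connection; in practice this would point at the warehouse.
    con = sqlite3.connect(":memory:")
    con.executescript(
        """
        CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
        CREATE TABLE dwh_orders (order_id INTEGER, amount REAL);
        INSERT INTO stg_orders VALUES (1, 10.0), (2, 25.5);
        INSERT INTO dwh_orders VALUES (1, 10.0), (2, 25.5);
        """
    )
    yield con
    con.close()


def test_row_counts_match(conn):
    # Completeness: every staged order must arrive in the warehouse.
    src = conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM dwh_orders").fetchone()[0]
    assert src == tgt


def test_amount_totals_match(conn):
    # Correctness: aggregated measures must reconcile between layers.
    src = conn.execute("SELECT ROUND(SUM(amount), 2) FROM stg_orders").fetchone()[0]
    tgt = conn.execute("SELECT ROUND(SUM(amount), 2) FROM dwh_orders").fetchone()[0]
    assert src == tgt
```

Checks of this kind can run after every load, which is what makes moving from a quarterly to a daily regression cadence practical.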
It is not big data that is the biggest change in the IT industry, but data usage. To become more data-driven and to succeed with their digital transformation, organizations are using their data more extensively to improve their business and decision processes. Unfortunately, it is hard for current data delivery systems to support new styles of data usage, such as data science, real-time data streaming analytics, and managing heavy data ingestion loads. This session discusses a real big-data-ready data architecture.
Target Audience: Data architects, data warehouse designers, IT architects, analysts, enterprise architects, solutions architects
Prerequisites: Some understanding of data architectures
Level: Advanced
Extended Abstract:
It is not big data that is the biggest change in the IT industry; it is data usage that has changed most drastically over the last ten years. To become more data-driven and to succeed with their digital transformation, organizations are using their data more extensively to improve their business and decision processes. This may mean that more data needs to be collected, resulting in big data systems, but the goal remains to do more with data. Unfortunately, current data delivery systems don't have the right characteristics for supporting the requirements of big data systems, such as supporting data science, handling massive data streaming workloads, managing heavy data ingestion loads, and so on. Additionally, many organizations that are developing big data systems are developing isolated data delivery systems, such as an isolated data lake, an isolated data streaming system, an isolated data services system, and so on. This avalanche of data delivery systems is not beneficial to the organization. The wheel is reinvented over and over again. It's time to design real big-data-ready data architectures. This session discusses such an architecture, including the supporting technologies and their pros and cons.
Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.
Data volumes are exploding, and companies are striving to use advanced analytics for more data-driven insights and self-learning systems. Enabling scalable data onboarding and analytics delivery processes with little human intervention but strong governance is key to extracting value from Big Data and Analytics successfully. The CC CDQ has developed a framework for governing Big Data and Analytics in close collaboration with industry partners. The framework supports practitioners in setting up processes, roles and tools to scale analytics use cases.
Target Audience: Analytics Manager, Data Manager, Project Leader
Prerequisites: Basic knowledge and some experience in analytics or data management
Level: Basic
The use of distributed databases across various microservices is explained based on a research project example. The presentation elaborates on how to achieve data consistency across microservices, how to communicate using message brokers, how to scale the microservices, and how to achieve high application availability. Container virtualization and orchestration form the technological basis for the solution. The project example shows how to build an Artificial Intelligence (AI) solution – as a service!
Target Audience: Data Engineer, Data Scientist, Project leader, Architects
Prerequisites: Common understanding of database technology and architecture
Level: Advanced
Extended Abstract:
Developing microservices has various advantages compared to the traditional monolithic approach. Loosely coupled microservices enable smaller teams to develop independently and to use CI/CD on a daily basis. Microservices can be scaled independently and can use different database technologies optimised for particular use cases. Microservices are fault-isolated, so individual failures will not result in an overall outage.
But how do you ensure data consistency across the distributed databases of the microservices? How do you react to individual failures? And how do the services interact and communicate with each other?
A research project for intelligent energy analysis will be presented. It realizes an Artificial Intelligence (AI) solution that analyzes streaming data in near real time to ensure energy savings in a production environment. The presentation will explain the steps necessary to establish a microservice environment for Artificial Intelligence and Machine Learning. Central logging guarantees operations monitoring across all microservices. Dashboards present the results both to the technical staff monitoring the Machine Learning libraries and to the process owner of the domain, e.g. the operations manager or insurance agent. The solution is based on Docker as the container virtualisation technology and Kubernetes for container orchestration. In the research project, the solution is realized on premises, but it can easily be deployed in the cloud.
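The abstract leaves the concrete consistency mechanism open; one commonly used option for keeping distributed microservice databases consistent via a message broker is the transactional outbox pattern. The plain-Python sketch below illustrates that pattern with an in-memory database and a stand-in publisher; all table, topic and function names are illustrative assumptions, not the project's actual components.

```python
# Illustrative sketch of the transactional outbox pattern, one common way to keep
# distributed microservice databases eventually consistent via a message broker.
# The broker, topic and table names are stand-ins, not the project's components.
import json
import sqlite3
import uuid


def save_measurement_with_outbox(con: sqlite3.Connection, sensor_id: str, value: float) -> None:
    """Write the local state change and the outgoing event in ONE local transaction."""
    event = {"id": str(uuid.uuid4()), "type": "MeasurementRecorded",
             "sensor_id": sensor_id, "value": value}
    with con:  # single local transaction: avoids the dual-write inconsistency
        con.execute("INSERT INTO measurements (sensor_id, value) VALUES (?, ?)",
                    (sensor_id, value))
        con.execute("INSERT INTO outbox (event_id, payload) VALUES (?, ?)",
                    (event["id"], json.dumps(event)))


def relay_outbox(con: sqlite3.Connection, publish) -> None:
    """Separate relay step: push pending events to the broker, then mark them sent."""
    pending = con.execute("SELECT event_id, payload FROM outbox WHERE sent = 0").fetchall()
    for event_id, payload in pending:
        publish("energy.events", payload)          # e.g. a Kafka/RabbitMQ publish call
        with con:
            con.execute("UPDATE outbox SET sent = 1 WHERE event_id = ?", (event_id,))


if __name__ == "__main__":
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE measurements (sensor_id TEXT, value REAL);
        CREATE TABLE outbox (event_id TEXT PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0);
    """)
    save_measurement_with_outbox(con, "press-01", 42.7)
    relay_outbox(con, publish=lambda topic, msg: print(f"publish to {topic}: {msg}"))
```

Other services then consume these events from the broker and update their own databases, which yields eventual consistency without distributed transactions.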
Data virtualization is being adopted by more and more organizations for different use cases, such as 360-degree customer views, the logical data warehouse, democratizing data, and self-service BI. As a result, knowledge is now available about how to use this agile data integration technology effectively and efficiently. In this session lessons learned, tips and tricks, do's and don'ts, and guidelines are discussed. And what are the biggest pitfalls? In short, the expertise gathered from numerous projects is discussed.
Target Audience: Data architects, data warehouse designers, IT architects, analysts, solutions architects
Prerequisites: Some understanding of database technology
Level: Advanced
Extended Abstract:
Data virtualization is being adopted by more and more organizations for different use cases, such as 360-degree customer views, the logical data warehouse, and self-service BI. As a result, knowledge is now available about how to use this agile data integration technology properly. In this practical session the lessons learned are discussed, along with tips and tricks, do's and don'ts, and guidelines. In addition, answers are given to questions such as: How does data virtualization work and what are the latest developments? What are the reasons to choose data virtualization? How do you make the right choices in architecture and technology? How does data virtualization fit into your application landscape? Where and how do you start with the implementation? And what are the biggest pitfalls in implementation? In short, the expertise gathered from numerous projects is discussed.
Rick van der Lans is a highly-respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. He has presented countless seminars, webinars, and keynotes at industry-leading conferences. He assists clients worldwide with designing new data architectures. In 2018 he was selected the sixth most influential BI analyst worldwide by onalytica.com.
As the data landscape becomes more complex with many new data sources, and data spread across data centres, cloud storage and multiple types of data store, the challenge of governing and integrating data gets progressively harder. The question is what can we do about it? This session looks at data fabric and data catalogs and how you can use them to build trusted re-usable data assets across a distributed data landscape.
Target Audience: CDO, Data Engineer, CIO, Data Scientist, Data Architect, Enterprise Architect
Prerequisites: Understanding of data integration software and development using a CI/CD DevOps approach
Level: Advanced
Extended Abstract:
As the data landscape becomes more complex with many new data sources, and data spread across data centres, cloud storage and multiple types of data store, the challenge of governing and integrating data gets progressively harder. The question is what can we do about it? This session looks at data fabric and data catalogs and how you can use them to build trusted re-usable data assets across a distributed data landscape.
Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels. He teaches, consults and presents around the globe.
How do you enable digital transformation and create value through analytics?
Building a global analytics function across a diverse application landscape, including SAP and multiple data sources, presents many challenges. See how ABB successfully managed this journey and now enjoys the benefits of operational analytics globally, a shift in mindsets and a more data-driven way of working.
You will also discover the impact of key technologies used (Change-Data-Capture, Automation & AI) and see real examples of specific analytics deployed in the business.
Target Audience: Business analysts, decision makers, C-Level
Prerequisites: Basic Knowledge
Level: Basic
Extended Abstract:
How do you enable digital transformation and create value through analytics?
This session tells the story of building a global analytics function in an environment with a diverse set of applications, including a complex SAP system landscape and many data sources. The speaker will talk about some of the challenges of the journey, but also about the success in deploying operational analytics globally, shifting mindsets and helping the transition to a more digital, data-driven way of working.
The audience will also discover the impact of the key technologies used (e.g. Change-Data-Capture, Visualization, Automation and AI) and how these helped to create value and drive revenue growth for ABB, using real examples of specific analytics deployed in the business.
A successful data culture brings together people from across the business to collaborate, share and learn from each other about data, analytics and the business value data can hold. In this session I share my suggestions for bringing together your most talented data people so your organization can gain more value from their skills and the data assets you invested in.
Come with an open mind and leave with tips, tricks and ideas to get started straight away.
Target Audience: Data practitioners, analysts, CoE leaders, Management
Prerequisites: basic understanding of how businesses use data
Level: Basic
Eva Murray is Snowflake's Lead Evangelist for the EMEA region and a recognized expert in data visualization. She has published four books, including on topics such as how to build data communities and how to optimize investments in data infrastructure in professional football.
At Snowflake, Eva Murray helps companies get their people excited about data and technology, and thereby realize aspects of their data strategy such as data culture, data democratization and data literacy.
Eva Murray has been an active part of the data analytics and data visualization community for many years and champions women in data and tech careers.
Metadata is a long-time favourite at conferences. However, most attempts to implement real metadata solutions have stumbled. Recent data quality and integrity issues, particularly in data lakes, have brought the topic to the fore again, often under the guise of data catalogues. But the focus remains largely technological. Only by reframing metadata as context-setting information and positioning it in a model of information, knowledge, and meaning for the business can we successfully implement the topic formerly known as metadata.
Target Audience:
- Enterprise-, Systems-, Solutions and Data Architects
- Systems-, Strategy and Business Intelligence Managers
- Data Warehouse and Data Lake Systems Designers and Developers
- Data and Database Administrators
- Tech-Savvy Business Analysts
Prerequisites: Basic knowledge of data management principles. Experience of designing data warehouses or lakes would be useful
Level: Basic
Extended Abstract:
Metadata has – quite correctly – been at the heart of data warehouse thinking for three decades. It's the sort of cross-functional and overarching topic that excites data architects and data management professionals. Why then can so few businesses claim success in implementing it? The real question is: implementing what?
This session answers that question by first refining our understanding of metadata, which has been made overly technical over the years and more recently been misappropriated by surveillance-driven organisations. With this new understanding, we can reposition metadata not as a standalone topic but as a vital and contiguous subset of the business information that is central to digital transformation. With this positioning, 'metadata implementation' will become a successful – but largely invisible – component of information delivery projects.
In this session, the ERGO Group, one of Europe's leading insurance companies, presents its AI Factory for the development and operationalization of AI models. The session gives an architectural overview of the AI Factory's components. Furthermore, it explains how cloud-native technologies like OpenShift and AWS Cloud Services aided the move towards a data-driven organization. A deep dive into the AI Factory's data ingestion process shows how metadata-driven data ingestion supports Data Governance in an enterprise context.
Target Audience: AI Leader, Insurance, Decision Maker, Data Engineer
Prerequisites: Background knowledge AI, Big Data Technologies, BIA
Level: Advanced
Extended Abstract:
In times of Advanced Analytics and AI, enterprises are striving towards automated and operationalized analytics pipelines.
In this session, ERGO and saracus consulting present the ERGO Group AI Factory. In particular, the presentation retraces how ERGO – in collaboration with saracus consulting – evolved from an on-premises analytics environment to an automated AI-Ops environment running on modern technologies within the AWS Cloud.
To this end, strategic aspects of delivering AI as a service as well as important components for delivering automated AI Pipelines in enterprises are highlighted.
Furthermore, the speakers take a deep dive into the technical aspects of the AI Factory's metadata-driven data ingestion pipeline, emphasizing how it supports the key functionalities for Data Governance within ERGO's Data Strategy.
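The session does not disclose ERGO's implementation details; the sketch below merely illustrates the general idea behind metadata-driven ingestion, where a declarative source definition (rather than hard-coded logic) controls what is loaded and which governance attributes are recorded alongside the data. All field names, classifications and paths are assumptions.

```python
# Minimal sketch of metadata-driven ingestion: a declarative source definition
# drives the load and the governance attributes recorded with it.
# All names (fields, classifications, paths) are illustrative assumptions.
import csv
import json
from datetime import datetime, timezone
from pathlib import Path

SOURCE_DEFINITION = {
    "source_name": "claims_export",
    "owner": "claims-data-office",
    "classification": "confidential",
    "expected_columns": ["claim_id", "policy_id", "amount"],
}


def ingest(csv_path: str, target_dir: str, definition: dict) -> dict:
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Schema check driven purely by the metadata definition, not hard-coded logic.
    missing = set(definition["expected_columns"]) - set(rows[0].keys() if rows else [])
    if missing:
        raise ValueError(f"Schema drift detected, missing columns: {sorted(missing)}")

    out = Path(target_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "data.json").write_text(json.dumps(rows))

    # Governance metadata written next to the data for lineage and cataloguing.
    manifest = {
        **definition,
        "row_count": len(rows),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "source_file": csv_path,
    }
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest
```

Because the source definition is data rather than code, new sources can be onboarded by adding a definition, and the recorded manifests feed directly into data governance and cataloguing.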
Digital Welcome Reception
Like many companies, the three banks of the Austrian 3 Banken Group face the challenge of implementing data governance. With an end-to-end approach to metadata – from the business definition to the DWH implementation – a basis for this was created. The use cases 'IT requirements', 'data quality' and 'data definitions' were the focus of the resource-saving project. The target groups for the metadata are primarily the lines of business (LoB), especially risk management, but also IT.
Target Audience: Data Governance Manager, Risk Manager, Data Quality Manager, Data Warehouse Architects, Data Modeler
Prerequisites: Basic knowledge
Level: Basic
Clemens Bousquet is a risk manager at Oberbank, which is part of the Austrian 3 Banken Group. Both in a leadership role in risk management and in numerous projects, he has built up strong expertise in business data modelling, the introduction of data governance, and the implementation of BCBS 239 and IFRS 9. Before that, he completed degree programmes in economics and international economics.
Lisa Müller is a Senior Consultant at dataspot. For several years she has been working on client projects, especially in banking, on questions at the interface between business departments and IT, developing customized solutions for clients. Her excellent subject-matter expertise comes into play across the entire life cycle of metadata and data governance, with a particular focus on business data modelling and metadata management. In addition to two completed bachelor's degrees, she holds a master's degree in business administration.
I will show the journey that Continental Tires took from its first attempts to extend its BI services for analytics to operating a mission-critical industrialization environment that runs several AI projects. Alongside the problems and obstacles encountered, I will explain the decisions that were taken and show the industrialization environment that was created. I will also explain why it was necessary to have such an environment instead of making use of classical BI tools.
Target Audience: Project leader, decision makers, CIO, Software engineers
Prerequisites: None
Level: Basic
Extended Abstract:
When Continental Tires started to establish data science as a function, it decided to grow it out of BI, but with an external hire. The journey was a long one, but thanks to this experience Continental was able to avoid some of the painful experiences made in other companies. They decided to build up an infrastructure for the industrialization of use cases from the first minute. The example of Tires is a good one for understanding the roadmap from BI to data science industrialization. While in the beginning there was hope that a simple extension of BI and BI tools would deliver the answer, today the organization is a completely different one, and Tires has created its own landscape for the industrialization of AI. Why this was done and why it might be useful will be shown in the talk.
In the past, data was often stored in a monolithic data warehouse. Recently, with the advent of big data, there has been a shift towards working directly with files. This raises challenges in data management and in storing metadata. In this presentation, I will show how SAP (ERP or BW) data can be extracted using SAP Data Intelligence (ODP framework) and stored along with its metadata. The data is stored in the Common Data Model (CDM) format and can easily be integrated and consumed by various products.
Target Audience: Professionals who would like to integrate SAP data into a Data Platform (e.g. Datalake) and include metadata information.
Prerequisites: Basic understanding of the SAP integration framework ODP, and cloud infrastructure (e.g. Azure Datalake Storage)
Level: Advanced
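As a rough illustration of the storage format mentioned in the abstract, the sketch below writes a data partition together with a much-simplified, CDM-style model.json manifest. Real CDM manifests are richer and versioned, and the entity and attribute names here are invented; this is not the speaker's pipeline.

```python
# Simplified illustration of a CDM-style folder: a data partition (CSV) plus a
# model.json manifest that carries the metadata. The manifest fields shown are a
# reduced, illustrative subset; real CDM manifests are richer and versioned.
import csv
import json
from pathlib import Path

folder = Path("cdm/SalesOrders")
folder.mkdir(parents=True, exist_ok=True)

# Data partition extracted from the source system (here: a hard-coded sample row).
with open(folder / "SalesOrders-part0.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerows([["0000012345", "2024-03-01", "199.90"]])

# Metadata describing the entity, so downstream tools can interpret the files.
model = {
    "name": "SalesModel",
    "version": "1.0",
    "entities": [{
        "$type": "LocalEntity",
        "name": "SalesOrders",
        "attributes": [
            {"name": "OrderId", "dataType": "string"},
            {"name": "OrderDate", "dataType": "date"},
            {"name": "NetAmount", "dataType": "decimal"},
        ],
        "partitions": [{"location": "SalesOrders-part0.csv"}],
    }],
}
(folder / "model.json").write_text(json.dumps(model, indent=2))
```

The point of the pattern is that the data files never travel alone: the manifest carries the schema and lineage information that consuming products need.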
Julius von Ketelhodt studied geophysics and geoinformation sciences at the University of the Witwatersrand (South Africa) and in Freiberg, and holds a doctorate in geophysics and seismology.
For several years he has been working in data & analytics projects for large customers across a wide range of industries, focusing explicitly on how relevant data from various SAP source systems can be integrated into the cloud data platforms of leading vendors. He also helped initiate and further develop the Common Data Model (CDM) concept in collaboration with SAP, Microsoft and Adobe.
He has been working as a BI consultant at the data & AI consultancy initions for almost four years and also serves as Product Lead in the in-house SAP product team.
Julius von Ketelhodt is married and lives with his wife and two children in Hamburg.
Quantifying the impact of customer experience (CX) improvements on the financials is crucial for prioritizing and justifying investments. In telecommunications, as in other subscription-based industries, churn is one of the most important, if not the most important, financial aspects to take into account. The presented approach shows how the churn impact of CX improvements – measured via Net Promoter Score (NPS) – can be estimated based on structural causal models. It makes use of algorithms for causal discovery and counterfactual simulation.
Target Audience: Data Scientist, Decision Maker
Prerequisites: basic understanding of statistical modeling
Level: Advanced
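The presented approach relies on proprietary data and causal-discovery algorithms; the toy structural causal model below only illustrates the counterfactual-simulation step on synthetic data. All structural equations, coefficients and effect sizes are invented for illustration.

```python
# Toy structural causal model illustrating the counterfactual-simulation idea:
# a CX driver influences NPS, and NPS influences churn. All coefficients are
# invented for illustration; this is not the presented model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Exogenous noise terms, kept fixed across the factual and counterfactual worlds.
u_cx = rng.normal(0.0, 1.0, n)
u_nps = rng.normal(0.0, 5.0, n)
u_churn = rng.normal(0.0, 0.3, n)


def simulate(cx_shift: float) -> float:
    """Run the structural equations and return the expected churn rate."""
    cx_quality = u_cx + cx_shift                 # do(CX := CX + shift)
    nps = 20 + 15 * cx_quality + u_nps           # NPS responds to CX
    churn_logit = -1.0 - 0.04 * nps + u_churn    # churn responds to NPS
    return float(np.mean(1 / (1 + np.exp(-churn_logit))))


baseline = simulate(0.0)
counterfactual = simulate(0.5)   # e.g. a CX programme worth +0.5 sigma
print(f"baseline churn rate:       {baseline:.3%}")
print(f"counterfactual churn rate: {counterfactual:.3%}")
print(f"estimated churn reduction: {baseline - counterfactual:.3%} points")
```

Because the noise terms are held fixed, the difference between the two runs isolates the causal effect of the simulated CX intervention on churn, which can then be translated into financial impact.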
Transforming an organization to become more data driven usually presents a set of technological challenges. In this session you will learn how to integrate existing applications data in real time to achieve a useful critical mass of explorable data.
Target Audience: CTOs, CIOs, CDOs, data engineers, data modelers, data analysts, data scientists
Prerequisites: General understanding of data in the context of the enterprise
Level: Basic
Extended Abstract:
'Data is the new oil' – these days many organizations are in the midst of a digital transformation. In order to enable data-driven processes, it is imperative to leverage the treasure trove of data hidden away in already existing applications. Extracting data from those applications (the E in ETL) is commonly the biggest pain point of data engineering. Many systems were not built to be extracted from, so data engineering needs to carefully balance performance against load. At the same time customers demand more: nightly batch processing is not good enough anymore, and the need for real-time information is growing.
How do you align all of this? Enter Change Data Capture (CDC). Based on a real-life use case, you will learn how to integrate data from an existing mission-critical SAP application into a modern cloud-based analytics warehouse. We will tell the story of our journey from the initial use cases and the constraints we faced, through discoveries along the way, to the final design and our appetite for more.
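The talk's concrete stack (SAP extractor, transport layer, cloud warehouse) is not spelled out in the abstract; as a generic illustration, the consuming side of Change Data Capture boils down to applying ordered insert/update/delete events to a target, as in the following sketch with invented record shapes.

```python
# Minimal sketch of the consuming side of Change Data Capture: change records
# carry an operation code plus the row image, and the target is kept in sync by
# applying them in order. Record shapes and keys are invented for illustration.
from typing import Dict, Iterable

ChangeRecord = dict  # e.g. {"op": "U", "key": "4711", "row": {...}, "lsn": 42}


def apply_changes(target: Dict[str, dict], changes: Iterable[ChangeRecord]) -> None:
    """Apply insert/update/delete events to an in-memory stand-in for the target table."""
    for change in sorted(changes, key=lambda c: c["lsn"]):   # preserve source order
        if change["op"] in ("I", "U"):
            target[change["key"]] = change["row"]            # upsert the full row image
        elif change["op"] == "D":
            target.pop(change["key"], None)                  # tolerate replays


if __name__ == "__main__":
    orders: Dict[str, dict] = {}
    apply_changes(orders, [
        {"op": "I", "key": "4711", "row": {"status": "NEW", "amount": 120.0}, "lsn": 1},
        {"op": "U", "key": "4711", "row": {"status": "SHIPPED", "amount": 120.0}, "lsn": 2},
        {"op": "D", "key": "4711", "row": None, "lsn": 3},
    ])
    print(orders)   # {} -> the order was inserted, updated, then deleted
```

Because only changed rows flow through the pipeline, the source system is touched lightly and the warehouse can stay close to real time instead of waiting for a nightly batch.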
The past year or two has seen major changes in vendor support for the extended Hadoop ecosystem, with withdrawals, collapses, and mergers, as well as de-emphasis of Hadoop in marketing. Some analysts have even declared Hadoop dead. The reality is more subtle, as this session shows, through an exploration of Hadoop's strengths and weaknesses, history, current status and prospects. Discussion topics include plans for initiating new Hadoop projects and what to do if you have already invested, successfully or otherwise, in data lakes.
Target Audience:
Enterprise-, systems-, solutions- and data architects
Systems-, strategy- and business intelligence managers
Data warehouse and data lake systems designers and developers
Data and database administrators
Prerequisites:
Basic knowledge of data management principles
Experience of designing data warehouses or lakes would be useful
Level: Basic
Extended Abstract:
Hadoop and its extended menagerie of related projects have been responsible for a decade of reinvention in the data management world. On the positive side, projects that would have been impossible with traditional technology have become not only possible but commonplace in today's environment. Hadoop has been at the heart of the explosion of digital business.
However, Hadoop has also been beset by a long list of complaints and problems, particularly relating to data management and systems management. Some of Hadoop's early strengths have been eroded as relational database vendors have upped their game. Furthermore, the rapid growth of Cloud solutions to many massive data delivery and processing projects has impacted Hadoop's commercial viability.
All these factors have led to a questioning of the future of the extended Hadoop ecosystem. This session answers these questions, not just from a technological viewpoint, but in the broader context of existing investments in skills and infrastructure, the direction of digital transformation, and the changing socio-political environment in which business operates.
The ZF plant in Saarbrücken, Germany, manufactures around 11,000 transmissions per day. With 17 basic transmission types in 700 variants, the plant manages a large number of variants. Every transmission consists of up to 600 parts. An AI project was started to get reliable and fast results on root cause discovery. Speed is important because production runs 24 hours a day, 7 days a week. The target is to reduce waste in certain manufacturing domains by 20%. The key success factor is the fast detection mechanism within the production chain delivered by AI.
Target Audience: Production manager, quality manager, CDO, CIO
Prerequisites: none
Level: Basic
Extended Abstract:
The ZF plant in Saarbrücken, Germany, manufactures around 11,000 transmissions per day. With 17 basic transmission types in 700 variants, the plant manages a large number of variants. Every transmission consists of up to 600 parts. Each transmission is 100% tested in every technical detail before shipment. The Saarbrücken plant is a forerunner and lead plant for innovative Industry 4.0 technologies. Therefore, activities were started to tackle one significant challenge caused by the enormous variant diversity: finding root causes for unsuccessful end-of-line testing. Managing this complexity is a big challenge because transmission parts can be produced in a huge number of variant processes. Process experts from each domain, such as quality and testing, assembly departments and manufacturing units, had to spend significant time analyzing influencing factors for malfunctions and deciding on the best actions to prevent end-of-line test failures.
Therefore, an AI project was started with the objective of getting reliable and fast results on root cause discovery. Speed is important because production runs 24 hours a day, 7 days a week. The sooner the real reasons for malfunctions are discovered, the sooner activities can be implemented to avoid bad quality. This saves a lot of time and significantly reduces waste. The target is to reduce waste in certain manufacturing domains by 20%. The key success factor is the fast detection mechanism within the production chain delivered by AI.
Complex root-cause findings can be reduced from several days to hours.
ZF's intention with the digitalization approach is to deliver fast information to the people responsible for decision processes, to keep the plant at optimal output with high-quality products. The self-learning AI solution Predictive Intelligence from IS Predict was used to analyze complex data masses from production, assembly, and quality in order to find reliable data patterns, giving transparency on disturbing factors and factor combinations. For training the algorithms, end-to-end tracing data was used, made available in a data lake.
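IS Predict's Predictive Intelligence is a proprietary product; the sketch below only illustrates the general principle of mining end-of-line test results for candidate root causes, here with a plain random forest and feature importances on synthetic data. Feature names and effect sizes are invented.

```python
# Illustrative sketch of pattern-based root-cause screening: train a classifier
# on end-of-line test outcomes and rank which process features best separate
# pass from fail. This is a generic stand-in, not the Predictive Intelligence
# product used in the project; data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5_000
features = ["press_force", "torque_station_3", "supplier_batch", "oil_temperature"]

X = rng.normal(size=(n, len(features)))
# Synthetic ground truth: failures are driven mainly by torque_station_3.
fail_prob = 1 / (1 + np.exp(-(X[:, 1] * 2.0 - 2.5)))
y = rng.uniform(size=n) < fail_prob

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

print("candidate root causes, most suspicious first:")
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda item: item[1], reverse=True):
    print(f"  {name:18s} {score:.3f}")
```

Ranking process features by how strongly they separate good from bad parts is what turns days of manual expert analysis into a short list of candidates that can be checked within hours.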
Britta Hilt has been working on the user side of artificial intelligence since 2011. She is co-founder and managing director of the AI company IS Predict, which has made a name for itself with its automation of data science and its explainable AI. Before founding IS Predict, she spent more than 15 years at an international IT company (IDS Scheer / Software AG), most recently as a director responsible for product management and solution marketing.
This session looks at how data science workbenches and machine learning automation tools can help business analysts to become data scientists and so meet the demand of business.
Target Audience: CDO, Head of Analytics, Data Scientist, Business Analysts, CIO
Prerequisites: Basic understanding of Data Science
Level: Advanced
Extended Abstract:
The demand for analytics is now almost everywhere in the business. Analytics are needed in sales, marketing and self-service, finance, risk, operations, supply chain and even HR. However, the current shortage of data scientists and the reliance on detailed skills such as programming have led many corporate executives to question current approaches to the development of high-value analytical models and to ask if they can be accelerated in any way to improve agility and reduce time to value. This session looks at this problem in detail and at how emerging data science workbenches and machine learning automation tools can help reduce the reliance on highly skilled data scientists and allow business analysts to become data scientists and so meet the demand of the business.
- The explosion in demand for analytics
- Data science and the modern analytical ecosystem
- Challenges with current approaches to analytics
- Requirements to reduce time to value and accelerate development of analytical models
- Improving productivity by integrating information catalogs and data science workbenches, e.g. Amazon SageMaker, Cloudera CDP Machine Learning, IBM Watson Studio, Microsoft Azure ML Service
- Accelerating model development, monitoring and model refresh using ML automation tools, e.g. DataRobot, SAS, Dataiku Data Science Studio, Big Squid (a minimal open-source sketch of the idea follows after this list)
- Facilitating rapid analytics deployment via analytics as a service to maximise effectiveness and competitive edge
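The workbenches and automation tools listed above are commercial products; as a minimal open-source illustration of the underlying idea (automated model and hyperparameter search), consider the following scikit-learn sketch on a bundled toy dataset.

```python
# Minimal illustration of ML automation: automatically search several model
# families and hyperparameters and keep the best cross-validated pipeline.
# This uses open-source scikit-learn, not any of the products named above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": (
        Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=5000))]),
        {"clf__C": [0.1, 1.0, 10.0]},
    ),
    "random_forest": (
        Pipeline([("clf", RandomForestClassifier(random_state=0))]),
        {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 10]},
    ),
}

# Search every candidate family and keep the best cross-validated model.
best_name, best_search = None, None
for name, (pipeline, grid) in candidates.items():
    search = GridSearchCV(pipeline, grid, cv=5, n_jobs=-1).fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(f"selected model: {best_name}, cv score {best_search.best_score_:.3f}, "
      f"test score {best_search.score(X_test, y_test):.3f}")
```

Commercial AutoML tools add far more (feature engineering, monitoring, model refresh), but the core loop of searching, scoring and selecting is the same, which is what lets business analysts build usable models without deep programming skills.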
Mike Ferguson is Managing Director of Intelligent Business Strategies and Chairman of Big Data LDN. An independent analyst and consultant with over 40 years of IT experience, he specialises in data management and analytics, working at board, senior IT and detailed technical IT levels. He teaches, consults and presents around the globe.
For decades, the engagement of IT staff in organizations has been handled via a single function or department. Whatever title it bears, this single IT counter takes care of everything under the digital sun. This model generates unhealthy behaviors in the IT ranks that are detrimental to the enterprises that need digital to operate, evolve, transform, or survive.
Drawing a parallel with a more mature industry, the current distribution of roles is analyzed and compared. It shows that the standard structure creates conflicts of roles that would be unacceptable, and in some cases illegal, in many other fields of work.
The typical IT engagement model in organizations has a direct effect on what constitutes success and how it is measured. These measures, in their current state, create a ripple effect on the quality and value of digital investments.
You should come to the inevitable conclusion: it is high time to radically rethink how technology teams engage in organizations.
Analytics teams are struggling to create, publish, and maintain analytics to meet demand. Many analytics projects fail to meet expectations and deliver value. DataOps is a new approach that combines tools and practices to simplify the development of analytics and ensure high-quality data. DataOps shortens life cycles, reduces technical debt and increases analytics success. This session covers the best practices for the analytics team to deliver DataOps.
Target Audience: Data scientists, project leaders, analyst, data engineers, project sponsors, business leaders
Prerequisites: None
Level: Professional
Extended Abstract:
DataOps is the next generation of analytics delivery that addresses the major issues of analytics technical debt, reuse of code, data quality, and incorporates the flexibility of agile. This session defines what DataOps is (it is not DevOps) and highlights the best practices to address some of the largest challenges preventing analytics teams from delivering analytics value. The session outlines how DataOps is used to inspire teamwork, improve data quality, incorporate agile delivery, and establish an environment for reuse and innovation.
How is machine learning used in the real world? How did our customer Liebherr mitigate the problem of unreliable suppliers, thereby making their manufacturing process more efficient? How can our customers work their way through thousands of documents, quickly identifying the relevant ones? In this talk, Data Scientist Björn Heinen will elaborate on his past and current projects in the manufacturing industry by explaining how tangible customer problems have been solved using classical machine learning algorithms, computer vision and more.
Target Audience: The session is intended for those working in the machine and systems engineering industry, as well as those in other sectors who would like to optimize their processes with the help of AI and use business data successfully in the long term.
Prerequisites: No prerequisites needed.
Level: Basic