Data virtualization is being adopted by more and more organizations for a variety of use cases, such as 360-degree customer views, logical data warehouses, and self-service BI. As a result, practical knowledge is now available on how to use this agile data integration technology properly.
In this practical session those lessons learned are shared, including tips and tricks, do's and don'ts, and guidelines. In addition, answers are given to questions such as: How does data virtualization work and what are the latest developments? Where and how do you start with the implementation? And what are the biggest pitfalls? In short, the expertise gathered from numerous projects is discussed.
Target Audience: BI specialists, data warehouse designers, IT architects, and analysts
Prerequisites: General knowledge of databases, data warehousing, and BI
In recent years the rise of big data has led to a huge increase in new technologies for processing, analyzing, and storing data, such as Hadoop, NoSQL, NewSQL, GPU databases, Spark, and Kafka, as well as products with revolutionary internal architectures such as SnowflakeDB and Edge Intelligence. These new technologies allow us to simplify database architectures, process larger workloads, and implement new use cases. They also change the way we should design our data architectures. For example, data warehouses used to be designed independently of the underlying technology; today, the technology heavily influences the data architecture. This shift requires data architects to have in-depth knowledge of the available technologies. During this session Rick van der Lans discusses how these new technologies affect the design of data architectures.