The number of sources generating continuous, streaming data has exploded in recent years, and enterprises need to process and analyze ever-growing volumes of streaming data to deliver on business-critical use cases. However, building high-volume streaming data pipelines is challenging given the complexities of handling high-frequency streaming data and processing it in a scalable, distributed fashion. Two of the most widely used technologies for tackling this challenge today are Kafka and Spark Streaming, which together enable high-throughput, low-latency, fault-tolerant stream processing pipelines. Since Spark 2.0, Spark offers a new high-level API, Structured Streaming, a stream processing engine built on Spark SQL that is changing how developers write scalable, fault-tolerant stream processing applications. In this session, we will see how to build streaming data pipelines with Spark Structured Streaming that consume data from Kafka and/or write results back to Kafka, along with the advanced capabilities Structured Streaming offers for handling streaming complexities and making applications production-ready. We will also walk through a code-based demo and discuss real challenges and lessons learned in building streaming data pipelines.
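To make the Kafka-to-Kafka pipeline shape concrete, here is a minimal PySpark sketch of the pattern the session covers: reading a stream from Kafka, applying a windowed aggregation, and writing results back to Kafka. The broker address, topic names (`events-in`, `events-out`), and checkpoint path are illustrative assumptions, not fixed values from the session; running it requires a Spark installation and a reachable Kafka broker with the `spark-sql-kafka` connector on the classpath.

```python
# Hedged sketch of a Kafka -> Structured Streaming -> Kafka pipeline.
# Assumes PySpark plus the spark-sql-kafka connector, and a Kafka broker
# at localhost:9092 with topics "events-in" / "events-out" (hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, window

spark = (SparkSession.builder
         .appName("kafka-structured-streaming-demo")
         .getOrCreate())

# Source: Kafka delivers key/value as binary, so cast them to strings.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events-in")
          .load()
          .selectExpr("CAST(key AS STRING) AS key",
                      "CAST(value AS STRING) AS value",
                      "timestamp"))

# Transformation: count events per key over 1-minute event-time windows.
counts = (events
          .groupBy(window(col("timestamp"), "1 minute"), col("key"))
          .agg(count("*").alias("n")))

# Sink: the Kafka sink needs a string/binary "value" column, and a
# checkpoint location so the query is fault-tolerant across restarts.
query = (counts
         .selectExpr("CAST(key AS STRING) AS key",
                     "to_json(struct(*)) AS value")
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("topic", "events-out")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .outputMode("update")
         .start())

query.awaitTermination()
```

The checkpoint location is what gives the query end-to-end fault tolerance: on restart, Spark recovers its Kafka offsets and aggregation state from it rather than reprocessing or dropping data.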