Kafka Stream (KStream) vs Apache Flink


In this article, we take a look at two of the most popular stream processing frameworks, Flink and Kafka’s Stream API, to see which is right for you.

Apr. 07, 2020 · Big Data Zone


Overview

Two of the most popular and fast-growing frameworks for stream processing are Flink (since 2015) and Kafka's Stream API (since 2016, in Kafka v0.10). Both are open-source Apache projects and are quickly replacing Spark Streaming, the traditional leader in this space.

In this article, I will share the key differences between these two methods of stream processing, with code examples. A few articles on this topic, such as [1], [2], and [3], cover the high-level differences, but offer little in the way of code examples.

In this post, I will take a simple problem, provide code for it in both frameworks, and compare them.

Example 1

The following are the steps in this example:

  1. Read a stream of numbers from a Kafka topic. The numbers are produced as strings surrounded by "[" and "]". All records are produced with the same key.
  2. Define a tumbling window of five seconds.
  3. Reduce (append the numbers as they arrive).
  4. Print to console.

Kafka Stream Code

Java

import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Printed;
import org.apache.kafka.streams.kstream.TimeWindows;

static String TOPIC_IN = "Topic-IN";

final StreamsBuilder builder = new StreamsBuilder();

builder
    // read the stream of strings from the input topic
    .stream(TOPIC_IN, Consumed.with(Serdes.String(), Serdes.String()))
    // all records share the same key, so this forms a single group
    .groupByKey()
    // tumbling window of five seconds
    .windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
    // append the numbers as they arrive
    .reduce((value1, value2) -> value1 + value2)
    // a windowed KTable must be converted back to a stream before printing
    .toStream()
    .print(Printed.toSysOut());

Apache Flink Code

Java

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

// A sketch of the same steps in Flink; the broker address and consumer
// group shown here are illustrative.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "flink-demo");

// SimpleStringSchema reads only the record value; the key is ignored
FlinkKafkaConsumer<String> consumer =
    new FlinkKafkaConsumer<>("Topic-IN", new SimpleStringSchema(), props);

env.addSource(consumer)
   // no key needed: timeWindowAll() processes all records in one window
   .timeWindowAll(Time.seconds(5))
   // append the numbers as they arrive
   .reduce((value1, value2) -> value1 + value2)
   // unlike Kafka Streams, the result can be printed directly
   .print();

env.execute("tumbling-window-example");

Differences Observed After Running Both

  1. You can't apply windowedBy() without first calling groupByKey() in Kafka Streams, whereas Flink provides the timeWindowAll() method to process all records in a stream without a key.
  2. Kafka Streams reads each record's key and value by default, whereas Flink needs a custom implementation of KafkaDeserializationSchema<T> to read both. If you are not interested in the key, you can pass new SimpleStringSchema() as the second parameter to the FlinkKafkaConsumer<> constructor (a sketch of such a schema follows this list). The implementation of MySchema is available on GitHub.
  3. You can print the pipeline topology from both, which helps in optimizing your code (see the snippet after this list). In addition to the JSON dump, Flink provides a web app to visualize the topology: https://flink.apache.org/visualizer/.
  4. In Kafka Streams, results can be printed to the console only after calling toStream(), whereas Flink can print them directly.
  5. Finally, Kafka Streams took 15+ seconds to print results to the console, while Flink printed them immediately. This looks a bit odd to me, since it adds an extra delay for developers; it likely happens because Kafka Streams forwards windowed results only when its record cache is flushed at the commit interval (30 seconds by default).
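
As item 2 notes, Flink needs a custom KafkaDeserializationSchema to read both key and value. Below is a minimal sketch of what such a schema can look like; the KafkaRecord POJO and its field names are assumptions here, and the actual MySchema implementation is the one linked on GitHub.

Java

import java.io.Serializable;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// hypothetical POJO holding both parts of a Kafka record plus its timestamp
class KafkaRecord implements Serializable {
    public String key;
    public String value;
    public long timestamp;
}

public class MySchema implements KafkaDeserializationSchema<KafkaRecord> {

    @Override
    public boolean isEndOfStream(KafkaRecord nextElement) {
        return false; // this stream never ends
    }

    @Override
    public KafkaRecord deserialize(ConsumerRecord<byte[], byte[]> record) {
        KafkaRecord out = new KafkaRecord();
        out.key = record.key() == null ? null : new String(record.key(), StandardCharsets.UTF_8);
        out.value = new String(record.value(), StandardCharsets.UTF_8);
        out.timestamp = record.timestamp();
        return out;
    }

    @Override
    public TypeInformation<KafkaRecord> getProducedType() {
        return TypeInformation.of(KafkaRecord.class);
    }
}

And for item 3, printing the topology is a one-liner in both frameworks:

Java

// Kafka Streams: describe the built topology
System.out.println(builder.build().describe());

// Flink: dump the execution plan as JSON (paste it into the visualizer linked above)
System.out.println(env.getExecutionPlan());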

Example 2

The following are the steps in this example:

  1. Read a stream of numbers from a Kafka topic. The numbers are produced as strings surrounded by "[" and "]". All records are produced with the same key.
  2. Define a tumbling window of five seconds.
  3. Define a grace period of 500 ms to allow late arrivals.
  4. Reduce (append the numbers as they arrive).
  5. Send the result to another Kafka topic.

Kafka Stream Code

Java

import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

// A sketch following the steps above; TOPIC_OUT and the unwrapping of the
// windowed key in map() are assumptions.
static String TOPIC_IN = "Topic-IN";
static String TOPIC_OUT = "Topic-OUT";

final StreamsBuilder builder = new StreamsBuilder();

builder
    .stream(TOPIC_IN, Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey()
    // tumbling window of five seconds with a 500 ms grace period for late arrivals
    .windowedBy(TimeWindows.of(Duration.ofSeconds(5)).grace(Duration.ofMillis(500)))
    // append the numbers as they arrive
    .reduce((value1, value2) -> value1 + value2)
    .toStream()
    // unwrap the windowed key so plain string serdes can be used downstream
    .map((windowedKey, value) -> KeyValue.pair(windowedKey.key(), value))
    .to(TOPIC_OUT, Produced.with(Serdes.String(), Serdes.String()));

Flink Code

Java

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

// A sketch following the steps above. MySchema and the KafkaRecord POJO are
// sketched earlier; the broker address and group id are illustrative.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "flink-demo");

// the custom schema exposes key, value, and timestamp of each record
FlinkKafkaConsumer<KafkaRecord> consumer =
    new FlinkKafkaConsumer<>("Topic-IN", new MySchema(), props);

FlinkKafkaProducer<String> producer =
    new FlinkKafkaProducer<>("Topic-OUT", new SimpleStringSchema(), props);

env.addSource(consumer)
   // tell Flink where the event time lives and tolerate 500 ms of lateness
   .assignTimestampsAndWatermarks(
       new BoundedOutOfOrdernessTimestampExtractor<KafkaRecord>(Time.milliseconds(500)) {
           @Override
           public long extractTimestamp(KafkaRecord record) {
               return record.timestamp;
           }
       })
   .keyBy(record -> record.key)
   // tumbling event-time window of five seconds
   .timeWindow(Time.seconds(5))
   // append the numbers as they arrive
   .reduce((r1, r2) -> {
       KafkaRecord out = new KafkaRecord();
       out.key = r1.key;
       out.value = r1.value + r2.value;
       out.timestamp = r2.timestamp;
       return out;
   })
   // keep only the value for the output topic
   .map(record -> record.value)
   .addSink(producer);

env.execute("tumbling-window-to-kafka");

Differences Observed After Running Both 

1. Due to its native integration with Kafka, it was much easier to define this pipeline in Kafka Streams than in Flink.

2. In Flink, I had to define both a consumer and a producer, which adds extra code.

3. Kafka Streams automatically uses the timestamp embedded in the record (the time it was inserted into Kafka), whereas Flink needs this information from the developer (see assignTimestampsAndWatermarks in the Flink code above). I think Flink's Kafka connector can be improved in the future so that developers can write less code.

4. Handling late arrivals is easier in Kafka Streams than in Flink, but note that Flink also provides a side-output stream for late arrivals, which is not available in Kafka Streams (a sketch follows this list).

5. Finally, after running both, I observed that Kafka Streams took a few extra seconds to write to the output topic, while Flink sent data to the output topic the moment the results of a time window were computed.
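
For item 4, here is a minimal sketch of Flink's side output for late records, assuming stream is the DataStream<KafkaRecord> from the example above; the tag name and the placeholder reduce are illustrative.

Java

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.util.OutputTag;

// tag identifying the side output that collects records arriving after the window closes
final OutputTag<KafkaRecord> lateTag = new OutputTag<KafkaRecord>("late-records") {};

SingleOutputStreamOperator<KafkaRecord> windowed = stream
    .keyBy(record -> record.key)
    .timeWindow(Time.seconds(5))
    // divert late records to the side output instead of dropping them
    .sideOutputLateData(lateTag)
    .reduce((r1, r2) -> r1); // placeholder reduce: keeps the first record

// late records arrive on a separate stream for custom handling
DataStream<KafkaRecord> lateRecords = windowed.getSideOutput(lateTag);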

Conclusion

  • If your project is tightly coupled to Kafka for both source and sink, then the Kafka Streams API is the better choice. However, you need to manage and operate the elasticity of Kafka Streams applications yourself.
  • Flink is a complete streaming computation system that supports high availability, fault tolerance, self-monitoring, and a variety of deployment modes.
  • Due to its built-in support for many third-party sources and sinks, Flink is more useful for projects that reach beyond Kafka, and it can be easily extended with custom data sources.
  • Flink has a richer API than Kafka Streams and supports batch processing, complex event processing (CEP), FlinkML (machine learning), and Gelly (graph processing).

Topics:
kafka streams, flink, flink api, streaming api, big data

Opinions expressed by DZone contributors are their own.
