Custom Partitioner in Kafka Using Scala: Take Quick Tour!

In this article, we discuss when Kafka’s default partitioner is not enough and how to build a custom partitioner in Kafka using Scala.

Jan. 07, 2020 · Big Data Zone


In this blog, we are going to explore the Kafka partitioner. We will try to understand why the default partitioner is not enough and when you might need a custom partitioner. We will also look at a use case and create code for the custom partitioner. I’m assuming that you have sound knowledge of Kafka. Let’s understand the behavior of the default partitioner.

The default partitioner follows these rules:

  1. If a producer provides a partition number in the message record, use it.
  2. If a producer doesn’t provide a partition number, but it provides a key, choose a partition based on a hash value of the key.
  3. When no partition number or key is present, pick a partition in a round-robin fashion.
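These three rules map directly onto the three ways a ProducerRecord can be constructed. Here is an illustrative sketch (the topic, keys, and values are placeholders):

Scala

import org.apache.kafka.clients.producer.ProducerRecord

// Rule 1: an explicit partition number (0 here) is always honored.
val withPartition = new ProducerRecord[String, String]("department", 0, "IT", "message")

// Rule 2: no partition number, but a key: the partition comes from a hash of the key.
val withKey = new ProducerRecord[String, String]("department", "IT", "message")

// Rule 3: neither a partition number nor a key: partitions are picked round-robin.
val withNeither = new ProducerRecord[String, String]("department", "message")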

So, you can use the default partitioner in three scenarios:

  1. If you already know the partition number you want to send a message record to, rely on the first rule.
  2. When you want to distribute data based on a hash of the key, rely on the second rule.
  3. If you don't care which partition a message record lands in, rely on the third rule.

There are two problems with key-based partitioning:

  1. If the producer uses the same key for every message record, hashing will always yield the same partition. But the reverse is not guaranteed: two different keys can still hash to the same partition.
  2. The default partitioner uses the hash value of the key and the total number of partitions on a topic to determine the partition number. If you increase the number of partitions, the default partitioner will start returning different partition numbers for the same key, as the sketch below demonstrates.
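The second problem is easy to demonstrate. For keyed messages, the default partitioner computes something along the lines of hash(key) % numPartitions (it uses the murmur2 hash internally), so the same key can start landing on a different partition as soon as the partition count changes. A small illustrative sketch:

Scala

import org.apache.kafka.common.utils.Utils

object RepartitioningProblem extends App {
  // Kafka's own murmur2 hash of the key, folded to a non-negative number.
  val keyHash = Utils.toPositive(Utils.murmur2("IT".getBytes))

  // The same key may map to a different partition once the topic is resized.
  println(s"with 5 partitions: ${keyHash % 5}")
  println(s"with 6 partitions: ${keyHash % 6}")
}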

Now, you might be wondering how to solve these problems.

The answer is very simple: implement your own partitioning algorithm based on your requirements and plug it in as a custom partitioner.
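How does the producer learn about a custom partitioner? It is registered through the producer's partitioner.class property. A minimal sketch, assuming the partitioner class we will write below is called com.knoldus.CustomPartitioner:

Scala

import java.util.Properties

val props = new Properties()
// Tell the producer to route records through our partitioner instead of the default one.
props.put("partitioner.class", "com.knoldus.CustomPartitioner")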

You may also like: Kafka Internals: Topics and Partitions.

Kafka Custom Partitioner Example

Let’s create an example use-case and implement a custom partitioner. Try to understand the problem statement with the help of a diagram.

[Diagram: the department topic with five partitions, two reserved for the IT department and three shared by the remaining departments]

Assume we are collecting data from different departments. All the departments send data to a single topic named department. I have planned five partitions for the topic, but I want two partitions dedicated to a specific department, named IT, and the remaining three partitions shared by the rest of the departments. How would you achieve this?

You can address this requirement, and any other kind of partitioning need, by implementing a custom partitioner.
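As a side note, the five-partition department topic has to exist before we can produce to it. Here is a minimal sketch of creating it programmatically with Kafka's AdminClient, assuming a single local broker on localhost:9092:

Scala

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

object CreateDepartmentTopic extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")

  val admin = AdminClient.create(props)
  // Five partitions, replication factor 1 (fine for a development setup).
  val departmentTopic = new NewTopic("department", 5, 1.toShort)
  admin.createTopics(Collections.singletonList(departmentTopic)).all().get()
  admin.close()
}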

Kafka Producer

Let’s look at the producer code.

Scala

package com.knoldus

import java.util.Properties
import org.apache.kafka.clients.producer._

object KafkaProducer extends App {
  val props = new Properties()
  val topicName = "department"

  props.put("bootstrap.servers", "localhost:9092,localhost:9093")
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  // Register the custom partitioner that we implement later in this post.
  props.put("partitioner.class", "com.knoldus.CustomPartitioner")

  val producer = new KafkaProducer[String, String](props)

  // The key carries the department name; the value is the message payload.
  val record = new ProducerRecord[String, String](topicName, "IT", "message from the IT department")
  producer.send(record)

  producer.close()
}

The first step in writing messages to Kafka is to create a producer object with the properties you want to pass to the producer. A Kafka producer has three mandatory properties, as you can see in the above code:

  1. bootstrap.servers: A list of host:port pairs of Kafka brokers that the producer will use to establish a connection to the Kafka cluster. It is recommended to include at least two brokers, so that the producer can still connect to the cluster if one broker goes down.
  2. key.serializer: Name of the class that will be used to serialize the key.
  3. value.serializer: Name of the class that will be used to serialize the value.

If you look at the rest of the code, there are only three steps:

  1. Create a KafkaProducer object.
  2. Create a ProducerRecord object.
  3. Send the record to the broker.

That is all that we do in a Kafka Producer.

Kafka Custom Partitioner

We need to create our class by implementing the Partitioner interface. Your custom partitioner class must implement three methods from the interface:

  1. configure
  2. partition
  3. close

Let’s look at the code.

Scala

package com.knoldus

import java.util
import org.apache.kafka.clients.producer.Partitioner
import org.apache.kafka.common.Cluster
import org.apache.kafka.common.utils.Utils

class CustomPartitioner extends Partitioner {

  override def configure(configs: util.Map[String, _]): Unit = {
    // Nothing to initialize in this example.
  }

  override def partition(topic: String, key: Any, keyBytes: Array[Byte],
                         value: Any, valueBytes: Array[Byte], cluster: Cluster): Int = {
    // Step 1: determine the number of partitions and reserve 40% of them for IT.
    val numPartitions = cluster.partitionsForTopic(topic).size
    val reservedPartitions = (numPartitions * 0.4).toInt // 2 when numPartitions = 5

    // Step 2: without a department name as the key, we cannot route the message.
    if (keyBytes == null || !key.isInstanceOf[String])
      throw new IllegalArgumentException("All messages must carry a department name as the key")

    if (key.asInstanceOf[String] == "IT")
      // Step 3: hash the message value into one of the reserved partitions (0 or 1).
      Utils.toPositive(Utils.murmur2(valueBytes)) % reservedPartitions
    else
      // Step 4: hash the key into the remaining partitions (2 to 4).
      Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - reservedPartitions) + reservedPartitions
  }

  override def close(): Unit = {
    // Nothing to clean up in this example.
  }
}

The configure and close methods are used for initialization and cleanup. In our example, we don't have anything to initialize or clean up.

The partition method is where all the action happens. The producer calls this method for each message record. Its inputs are the topic, the key, the value, and the cluster details, and all we need to do is return an integer as the partition number. This is the place where we implement our algorithm.

Algorithm

Let's try to understand the algorithm that I have implemented. It works in four simple steps; a small worked example follows the list.

  1. The first step is to determine the number of partitions and reserve 40% of them for the IT department. With five partitions on the topic, this logic reserves two partitions for IT. The next question is, how do we get the number of partitions in the topic? We receive a cluster object as an input, and its partitionsForTopic method gives us a list of all the partitions of the topic. The size of that list is the number of partitions, and 40% of it is the reserved count. So, with five partitions, the reserved count comes to 2.
  2. If we don't get a message key, we throw an exception. We need the key because it tells us the department name. Without the department name, we can't decide whether the message should go to one of the two reserved partitions or to the other three.
  3. The next step is to determine the partition number. If the key is IT, we hash the message value and take it modulo 2, the number of reserved partitions. The modulo makes sure that we always get 0 or 1.
  4. If the key is not IT, we hash the key and take it modulo 3, which gives a number between 0 and 2, and then add 2 to shift the result into partitions 2 through 4.
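To make the arithmetic concrete, here is a small, hypothetical worked example of the routing math, reusing Kafka's murmur2 hash as in the partitioner above:

Scala

import org.apache.kafka.common.utils.Utils

object PartitionMathDemo extends App {
  // Mirrors the partitioner's routing math for a five-partition topic.
  def toPartition(key: String, value: String): Int =
    if (key == "IT")
      Utils.toPositive(Utils.murmur2(value.getBytes)) % 2     // reserved partitions: 0 or 1
    else
      Utils.toPositive(Utils.murmur2(key.getBytes)) % 3 + 2   // shared partitions: 2, 3, or 4

  println(toPartition("IT", "ticket #42"))    // always 0 or 1
  println(toPartition("HR", "leave request")) // always 2, 3, or 4
}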

Kafka Consumer

Let’s look at the consumer code.

Scala

package com.knoldus

import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.jdk.CollectionConverters._

object KafkaConsumer extends App {
  val props = new Properties()
  val topicName = "department"

  props.put("bootstrap.servers", "localhost:9092,localhost:9093")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  // A consumer group id is required when subscribing to a topic.
  props.put("group.id", "department-consumer-group")

  val consumer = new KafkaConsumer[String, String](props)

  // Step 1: subscribe to the topic.
  consumer.subscribe(Collections.singletonList(topicName))

  // Step 2: consume messages from the topic.
  while (true) {
    val records = consumer.poll(Duration.ofMillis(100))
    for (record <- records.asScala)
      println(s"partition = ${record.partition()}, key = ${record.key()}, value = ${record.value()}")
  }
}
A Kafka consumer has three mandatory properties, as you can see in the above code:

  1. bootstrap.servers: A list of host:port pairs of Kafka brokers that the consumer will use to establish a connection to the Kafka cluster. It is recommended to include at least two brokers, so that the consumer can still connect to the cluster if one broker goes down.
  2. key.deserializer: Name of the class that will be used to deserialize the key.
  3. value.deserializer: Name of the class that will be used to deserialize the value.

If you look at the rest of the code, there are only two steps:

  1. Subscribe to the topic.
  2. Consume messages from the topic.

That is all that we do in a Kafka Consumer.

I hope you enjoyed this blog. You can now create a custom partitioner in Kafka using Scala. If you want the source code, please feel free to download it.

Thanks for reading!

References

1. https://kafka.apache.org/documentation/
2. https://docs.confluent.io/

Topics: kafka, kafka producer, scala, kafka producer api, big data, tutorial
