@Mahendra Sawarkar - I apologize for the confusion. Here are the answers to your questions using the Java API:
There is no EventProcessorClientBuilder property that caps the number of active clients per Event Hub; maxBatchSize (the parameter you pass to processEventBatch) only controls how many events are delivered to your handler per batch, not how many clients can run. Two limits apply instead. First, the service allows at most 5 concurrent readers per partition per consumer group, and it rejects a sixth receiver on the same partition with a quota error. Second, EventProcessorClient instances that share a checkpoint store balance partition ownership among themselves, so with 32 partitions at most 32 instances actively receive at a time; extra instances simply remain idle rather than throwing.
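As a minimal sketch of the low-level consumer path where the per-partition reader limit applies (the connection string, event hub name `my-hub`, and partition ID are placeholders you would substitute; this assumes azure-messaging-eventhubs 5.x):

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;
import java.time.Duration;

public class ReaderLimitSketch {
    public static void main(String[] args) {
        // "<CONNECTION_STRING>" and "my-hub" are placeholders for your namespace.
        EventHubConsumerClient consumer = new EventHubClientBuilder()
                .connectionString("<CONNECTION_STRING>", "my-hub")
                .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
                .buildConsumerClient();

        // Each consumer reading a partition like this counts against the
        // service limit of 5 concurrent readers per partition per consumer
        // group; the service rejects a 6th reader on the same partition.
        for (PartitionEvent event : consumer.receiveFromPartition(
                "0", 10, EventPosition.latest(), Duration.ofSeconds(30))) {
            System.out.println(event.getData().getBodyAsString());
        }
        consumer.close();
    }
}
```

For most applications the EventProcessorClient shown below is preferable, because it handles the partition coordination for you.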
By default, a new EventProcessorClient that finds no checkpoint for a partition starts consuming from the latest event. You can change this with the initialPartitionEventPosition setting on the EventProcessorClientBuilder, which maps partition IDs to an EventPosition; using EventPosition.earliest() makes the processor start from the beginning of that partition's stream. Note that this setting only applies when no checkpoint exists, because offsets are managed at the consumer-group level through checkpoints: when a client finishes processing an event, it calls updateCheckpoint() on the EventContext passed to its event handler. This persists the checkpoint for that partition and consumer group, and the next client that takes over the partition resumes from the last checkpointed event.
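Here is a hedged sketch of that wiring, assuming azure-messaging-eventhubs 5.x plus the azure-messaging-eventhubs-checkpointstore-blob package; the connection strings, hub name, container name, and partition ID are placeholders:

```java
import com.azure.messaging.eventhubs.EventProcessorClient;
import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.storage.blob.BlobContainerClientBuilder;
import java.util.Map;

public class ProcessorFromEarliest {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder connection strings and names -- substitute your own.
        BlobCheckpointStore store = new BlobCheckpointStore(
                new BlobContainerClientBuilder()
                        .connectionString("<STORAGE_CONNECTION_STRING>")
                        .containerName("checkpoints")
                        .buildAsyncClient());

        EventProcessorClient processor = new EventProcessorClientBuilder()
                .connectionString("<EVENT_HUBS_CONNECTION_STRING>", "my-hub")
                .consumerGroup("$Default")
                .checkpointStore(store)
                // Keyed by partition ID; used only for partitions with no
                // checkpoint yet -- once a checkpoint exists it wins.
                .initialPartitionEventPosition(
                        Map.of("0", EventPosition.earliest()))
                .processEvent(eventContext -> {
                    System.out.printf("partition %s: %s%n",
                            eventContext.getPartitionContext().getPartitionId(),
                            eventContext.getEventData().getBodyAsString());
                    // Persist the offset so the next owner resumes here.
                    eventContext.updateCheckpoint();
                })
                .processError(errorContext ->
                        System.err.println(errorContext.getThrowable().getMessage()))
                .buildEventProcessorClient();

        processor.start();
        Thread.sleep(30_000);   // process for a while
        processor.stop();
    }
}
```

In practice you would map every partition ID (or accept the latest-event default for unmapped partitions), and checkpoint less often than once per event to reduce storage round-trips.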
Duplicate alerts can occur if a new client starts consuming from the beginning of the stream, or re-reads events that were processed but never checkpointed. Checkpointing at the consumer-group level minimizes this: when a client claims a partition, it reads the last checkpoint and resumes from there. Keep in mind that Event Hubs provides at-least-once delivery, so events between the last checkpoint and a failure can still be redelivered; make your processing idempotent if duplicates matter. The library defines a CheckpointStore
interface for storing and retrieving checkpoints and ownership records. A prebuilt implementation backed by Azure Blob Storage ships in the azure-messaging-eventhubs-checkpointstore-blob package, or you can implement the interface yourself against another store such as Azure Table Storage.
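To illustrate how a shared checkpoint store ties this together, here is a sketch (same placeholder connection strings and names as above, azure-messaging-eventhubs 5.x) of two processor instances pointed at one BlobCheckpointStore, so partitions are split between them and a restarted instance resumes from the last checkpoint instead of replaying:

```java
import com.azure.messaging.eventhubs.EventProcessorClient;
import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore;
import com.azure.storage.blob.BlobContainerClientBuilder;

public class SharedCheckpointStoreSketch {

    // Both instances use the same blob container, so they coordinate
    // partition ownership and resume from each other's checkpoints.
    static EventProcessorClient buildProcessor(BlobCheckpointStore store,
                                               String name) {
        return new EventProcessorClientBuilder()
                .connectionString("<EVENT_HUBS_CONNECTION_STRING>", "my-hub")
                .consumerGroup("$Default")
                .checkpointStore(store)
                .processEvent(ctx -> {
                    System.out.printf("[%s] partition %s: %s%n", name,
                            ctx.getPartitionContext().getPartitionId(),
                            ctx.getEventData().getBodyAsString());
                    ctx.updateCheckpoint();
                })
                .processError(err ->
                        System.err.println(err.getThrowable().getMessage()))
                .buildEventProcessorClient();
    }

    public static void main(String[] args) throws InterruptedException {
        BlobCheckpointStore store = new BlobCheckpointStore(
                new BlobContainerClientBuilder()
                        .connectionString("<STORAGE_CONNECTION_STRING>")
                        .containerName("checkpoints")
                        .buildAsyncClient());

        EventProcessorClient a = buildProcessor(store, "instance-a");
        EventProcessorClient b = buildProcessor(store, "instance-b");
        a.start();
        b.start();          // partitions are balanced between a and b
        Thread.sleep(60_000);
        a.stop();           // b takes over a's partitions, resuming from
        b.stop();           // their last checkpoints rather than replaying
    }
}
```

The same pattern scales to more instances: ownership is rebalanced automatically, bounded by the partition count.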
I hope this helps. Let me know if you have any further questions.