

Kafka transport

The transport uses the Kafka streaming platform as an MQ broker.


$ composer require enqueue/rdkafka
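The enqueue/rdkafka package is built on top of the php-rdkafka extension, so the extension has to be installed and enabled first. A typical setup via pecl might look like this (the ini path depends on your PHP installation, and librdkafka itself must already be present):

```shell
# build and install the php-rdkafka extension (requires librdkafka headers)
pecl install rdkafka

# enable the extension; the conf.d path varies by distribution
echo "extension=rdkafka.so" > /etc/php/conf.d/rdkafka.ini
```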

Create context

use Enqueue\RdKafka\RdKafkaConnectionFactory;

// connects to localhost:9092
$connectionFactory = new RdKafkaConnectionFactory();

// same as above
$connectionFactory = new RdKafkaConnectionFactory('kafka:');

// same as above
$connectionFactory = new RdKafkaConnectionFactory([]);

// connect to a Kafka broker plus custom options
$connectionFactory = new RdKafkaConnectionFactory([
    'global' => [
        'group.id' => uniqid('', true),
        'metadata.broker.list' => 'localhost:9092',
        'enable.auto.commit' => 'false',
    ],
    'topic' => [
        'auto.offset.reset' => 'beginning',
    ],
]);

$context = $connectionFactory->createContext();

// if you have enqueue/enqueue library installed you can use a factory to build context from DSN
$context = (new \Enqueue\ConnectionFactoryFactory())->create('kafka:')->createContext();

Send message to topic

/** @var \Enqueue\RdKafka\RdKafkaContext $context */

$message = $context->createMessage('Hello world!');

$fooTopic = $context->createTopic('foo');

$context->createProducer()->send($fooTopic, $message);

Send message to queue

/** @var \Enqueue\RdKafka\RdKafkaContext $context */

$message = $context->createMessage('Hello world!');

$fooQueue = $context->createQueue('foo');

$context->createProducer()->send($fooQueue, $message);

Consume message

/** @var \Enqueue\RdKafka\RdKafkaContext $context */

$fooQueue = $context->createQueue('foo');

$consumer = $context->createConsumer($fooQueue);

// Enable async commit to gain better performance (true by default since version 0.9.9).
$consumer->setCommitAsync(true);

$message = $consumer->receive();

// process a message

$consumer->acknowledge($message);
// $consumer->reject($message);
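A long-running worker typically wraps `receive` in a loop, acknowledging each message once it has been processed. A minimal sketch (the timeout value and the `handleMessage` callback are illustrative placeholders, not part of the transport's API):

```php
<?php
/** @var \Enqueue\RdKafka\RdKafkaContext $context */

$consumer = $context->createConsumer($context->createQueue('foo'));

while (true) {
    // wait up to 5 seconds for the next message; null is returned on timeout
    if ($message = $consumer->receive(5000)) {
        try {
            // handleMessage() is a hypothetical application callback
            handleMessage($message->getBody());

            $consumer->acknowledge($message);
        } catch (\Throwable $e) {
            $consumer->reject($message);
        }
    }
}
```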

Serialize message

By default the transport serializes messages to JSON, but you might want to use another format such as Apache Avro. For that you have to implement the Serializer interface and set it on the context, producer, or consumer. If a serializer is set on the context, it is injected into all consumers and producers created by that context.

use Enqueue\RdKafka\Serializer;
use Enqueue\RdKafka\RdKafkaMessage;

class FooSerializer implements Serializer
{
    public function toMessage($string) {}

    public function toString(RdKafkaMessage $message) {}
}

/** @var \Enqueue\RdKafka\RdKafkaContext $context */

$context->setSerializer(new FooSerializer());
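For illustration, here is what a JSON-based serializer might look like. The `RdKafkaMessage` constructor signature used here (body, properties, headers) follows the usual enqueue message convention; treat this as a sketch rather than the library's canonical implementation:

```php
<?php
use Enqueue\RdKafka\RdKafkaMessage;
use Enqueue\RdKafka\Serializer;

class JsonSerializer implements Serializer
{
    public function toString(RdKafkaMessage $message)
    {
        // flatten the message into a single JSON payload
        return json_encode([
            'body' => $message->getBody(),
            'properties' => $message->getProperties(),
            'headers' => $message->getHeaders(),
        ]);
    }

    public function toMessage($string)
    {
        $data = json_decode($string, true);

        return new RdKafkaMessage($data['body'], $data['properties'], $data['headers']);
    }
}
```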

Change offset

By default a consumer starts from the beginning of the topic and updates the offset as you process messages. It is also possible to change the current offset explicitly.

/** @var \Enqueue\RdKafka\RdKafkaContext $context */

$fooQueue = $context->createQueue('foo');

$consumer = $context->createConsumer($fooQueue);
$consumer->setOffset(2);

$message = $consumer->receive(2000);

Usage with Symfony bundle

Configure enqueue to use rdkafka as your transport:

# app/config/config.yml

enqueue:
    default:
        transport: "rdkafka:"
        client: ~

You can also use the extended configuration to pass additional options, if you don't want to pass them via the DSN string or need to set specific options. Since rdkafka uses librdkafka (being basically a wrapper around it), most configuration options are identical to those found in the librdkafka configuration documentation.

# app/config/config.yml

enqueue:
    default:
        transport:
            dsn: "rdkafka://"
            global:
                ### Make sure this is unique for each application / consumer group and does not change
                ### Otherwise, Kafka won't be able to track your last offset and will always start according to
                ### `auto.offset.reset` setting.
                ### See Kafka documentation regarding `group.id` property if you want to know more
                group.id: 'foo-app'
            topic:
                auto.offset.reset: beginning
            ### Commit async is true by default since version 0.9.9.
            ### It is suggested to set it to true in earlier versions since otherwise consumers become extremely slow,
            ### waiting for offset to be stored on Kafka before continuing.
            commit_async: true
        client: ~
