Saturday 30 March 2019

Apache Kafka CCDAK Exam Notes

Hi Readers,

Updated Kafka notes are available here: https://codingnconcepts.com/post/apache-kafka-ccdak-exam-notes/

If you are planning or preparing for an Apache Kafka certification, then this is the right place for you. There are many Apache Kafka certifications available in the market, but CCDAK (Confluent Certified Developer for Apache Kafka) is the best-known certification, as Kafka is now maintained by Confluent.


Confluent has introduced the CCOAK certification recently. CCOAK is mainly for DevOps engineers focusing on building and managing Kafka clusters. CCDAK is mainly for developers and solution architects focusing on design, producers, and consumers. If you are still not sure, I recommend going for CCDAK as it is a more comprehensive exam compared to CCOAK.


From here onward, we will talk about how to prepare for CCDAK.


I have recently cracked CCDAK and would suggest that you prepare well for the exam. This exam verifies your theoretical as well as practical understanding of Kafka. You need to answer 60 questions in 90 minutes from your laptop under the supervision of an online proctor. There is no mention of the number of questions that need to be correct in order to pass the exam; they just tell you pass or fail at the end. At least 40-50 hours of preparation is required.


I have prepared for CCDAK using the Confluent Kafka Definitive Guide, the Apache Kafka and Confluent documentation, and the Udemy CCDAK 150-question practice set.

You should prepare well in the following areas:
  1. Kafka Architecture
    • Read Confluent Kafka Definitive Guide PDF
    • Read Apache Kafka Documentation
    • Once you have read these, revise using the KAFKA THEORY section in this blog. You can expect most of the questions from these notes.
  2. Kafka Java APIs
    • Read the Apache Kafka documentation on how to create a producer and a consumer in Java
  3. Kafka CLI
    • Read Confluent Kafka Definitive Guide PDF
    • Once you have read these, revise using the KAFKA CLI section in this blog. You can expect most of the questions from these notes.
  4. Kafka Streams
    • Read Confluent Kafka Definitive Guide PDF
  5. Kafka Monitoring (Metrics)
    • Read Confluent Kafka Definitive Guide PDF
    • Read Apache Kafka Documentation for important metrics
    • Read Confluent Kafka Documentation as well
  6. Kafka Security
    • Read Apache Kafka Documentation
  7. Confluent KSQL
    • Read about KSQL from Confluent Documentation
  8. Confluent REST Proxy
    • Read about REST Proxy from the Confluent documentation
  9. Confluent Schema Registry
    • Read about Schema Registry from the Confluent documentation

Questions from CCDAK Exam

  1. Kafka Theory
    • Kafka is a .... ? pub-sub system
    • Kafka is mostly written in which language? Scala
  2. Kafka Streams (read the Kafka Streams notes to get answers to the questions below)
    • Which of the Kafka Streams operators are stateful?
    • Which of the Kafka Streams operators are stateless?
    • Which window type has no gap?
  3. Confluent Schema Registry (read the Confluent Schema Registry notes to get answers to the questions below)
    • Which of the following is not a primitive type of Avro?
    • Which of the following is not a complex type of Avro?
    • Which of the following is not a required field in an Avro schema?
    • Deleting a field without a default value in an Avro schema is ...... compatibility?
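
For quick reference, here is a minimal Avro record schema (the record and field names are made up for illustration). In a record schema, type, name, and fields are the required attributes; null, boolean, int, long, float, double, bytes, and string are Avro's primitive types, while record, enum, array, map, union, and fixed are the complex types:

{
  "type": "record",
  "name": "Customer",
  "namespace": "com.example",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}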

KAFKA THEORY

1. Cluster
2. Rack
3. Broker
  • Every broker in Kafka is a "bootstrap server" that knows about all brokers, topics, and partitions (metadata). That means a Kafka client (e.g. producer, consumer) only needs to connect to one broker in order to connect to the entire cluster.
  • At all times, exactly one broker must act as the controller of the cluster.
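
As an illustration of bootstrapping, here is a minimal Java sketch (the broker address is a placeholder) that connects to a single broker and discovers the rest of the cluster, including the current controller:

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class BootstrapDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // one reachable broker is enough to bootstrap
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // the client fetches cluster metadata from that single broker
            System.out.println("Brokers: " + admin.describeCluster().nodes().get());
            System.out.println("Controller: " + admin.describeCluster().controller().get());
        }
    }
}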
4. Topic
  • Kafka takes bytes as input without even loading them into memory (that's called zero copy)
  • Brokers have defaults for all the topic configuration parameters    
5. Partition
  • A topic can have one or more partitions.
  • It is not possible to delete a partition of a topic once created.
  • Order is guaranteed within a partition, and once data is written into a partition, it is immutable!
  • If a producer writes at 1 GB/sec and a single consumer can consume at 250 MB/sec, then you need at least 4 partitions (1 GB / 250 MB = 4), so that 4 consumers can read in parallel!
6. Segment
  • Partitions are made of segments (.log files)
  • At a time only one segment is active in a partition
  • log.segment.bytes=1 GB (default): max size of a single segment in bytes
  • log.segment.ms=1 week (default): time Kafka will wait before closing the segment if not full
  • Segments come with two indexes (files):
    • An offset-to-position index (.index file): allows Kafka to find at which position in the segment to read a given offset
    • A timestamp-to-offset index (.timeindex file): allows Kafka to find messages with a given timestamp
  • log.cleanup.policy=delete (Kafka default for all user topics) Delete data based on age of data (default is 1 week)
  • log.cleanup.policy=compact Delete based on keys of your messages. Will delete old duplicate keys after the active segment is committed. (Kafka default  for topic __consumer_offsets)
  • Log cleanup happens on partition segments. Smaller/more segments means log cleanup will happen more often!
  • The cleaner checks for work every 15 seconds (log.cleaner.backoff.ms)
  • log.retention.hours=1 week (default): number of hours to keep data for
  • log.retention.bytes=-1 (infinite, default): max size in bytes for each partition
  • Old segments will be deleted based on the log.retention.hours or log.retention.bytes rule
  • The offset of a message is immutable.
  • Deleted records can still be seen by consumers for a period of delete.retention.ms=24 hours (default)
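
To tie the segment and cleanup settings together, here is a hedged Java AdminClient sketch that creates a log-compacted topic; the topic name and sizes are made up. Note that topic-level overrides drop the log. prefix used by the broker-level defaults (cleanup.policy, segment.bytes, retention.ms):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 1 (placeholder sizing)
            NewTopic topic = new NewTopic("compacted-demo", 3, (short) 1);
            // topic-level overrides drop the "log." prefix of the broker defaults
            topic.configs(Map.of(
                    "cleanup.policy", "compact",
                    "segment.bytes", String.valueOf(64 * 1024 * 1024)));
            admin.createTopics(List.of(topic)).all().get(); // wait until created
        }
    }
}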
7. Offset
  • Each partition has its own offsets, starting from 0.
8. Topic Replication
  • Replication factor = 3 and partitions = 2 means there will be 6 partition replicas in total distributed across the Kafka cluster. Each partition will have 1 leader and 2 ISRs (in-sync replicas).
  • The broker that holds the leader replica of a partition is called the leader for that partition, and only the leader can receive and serve data for that partition.
  • The replication factor can not be greater than the number of brokers in the Kafka cluster. If a topic has a replication factor of 3, then each partition will live on 3 different brokers.
9. Producer
  • The producer automatically recovers from retriable errors: LEADER_NOT_AVAILABLE, NOT_LEADER_FOR_PARTITION, REBALANCE_IN_PROGRESS
  • Non-retriable errors: MESSAGE_TOO_LARGE
  • When producing to a topic which doesn't exist and auto.create.topics.enable=true, Kafka creates the topic automatically with the broker settings num.partitions and default.replication.factor
10. Producer Acknowledgment
  • acks=0: producer does not wait for ack (possible data loss)
  • acks=1: producer waits for the leader's ack (limited data loss)
  • acks=all: producer waits for leader + replica acks (no data loss)
acks=all must be used in conjunction with min.insync.replicas (can be set at the broker or topic level).
min.insync.replicas=2 implies that at least 2 brokers that are ISRs (including the leader) must acknowledge.
e.g. replication.factor=3, min.insync.replicas=2, acks=all can only tolerate 1 broker going down; otherwise the producer will receive a NOT_ENOUGH_REPLICAS exception on send.

11. Safe Producer Config
  • min.insync.replicas=2 (set at broker or topic level)
  • retries=MAX_INT: number of retries by the producer in case of transient failure/exception (default is 0)
  • max.in.flight.requests.per.connection=5: number of producer requests that can be made in parallel (default is 5)
  • acks=all
  • enable.idempotence=true: the producer sends a producer ID with each message so Kafka can identify duplicates. When Kafka receives a duplicate message with the same producer ID that it has already committed, it does not commit it again but still sends an ack to the producer (default is false)
12. High Throughput Producer using compression and batching
  • compression.type=snappy: value can be none (default), gzip, lz4, snappy. Compression is enabled at the producer level and doesn't require any config change in brokers or consumers. Compression is more effective for bigger batches of messages being sent to Kafka.
  • linger.ms=20: number of milliseconds a producer is willing to wait before sending a batch out (default 0). Increasing linger.ms increases the chance of batching.
  • batch.size=32KB or 64KB: maximum number of bytes that will be included in a batch (default 16KB). Any message bigger than the batch size will not be batched.
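
Putting items 11 and 12 together, here is a sketch of the producer properties for a safe, high-throughput producer (the broker address is a placeholder; min.insync.replicas is set at the broker or topic level, not on the producer):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerTuning {
    static Properties safeHighThroughputProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // safe producer
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // high throughput: compression + batching
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        return props;
    }
}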
13. Message Key
  • A producer can choose to send a key with a message.
  • If key=null, data is sent round robin across partitions.
  • If a key is sent, then all messages for that key will always go to the same partition. This can be used to order the messages for a specific key, since order is guaranteed within a partition.
  • Adding a partition to the topic will break the guarantee that the same key goes to the same partition.
  • Keys are hashed using the "murmur2" algorithm by default (see the sketch below).
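
A minimal Java sketch of sending a keyed message (topic, key, and value are made-up placeholders); every record with the same non-null key lands on the same partition:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // all records with key "truck-42" hash (murmur2) to the same partition,
            // so per-truck ordering is preserved; a null key is spread round robin
            producer.send(new ProducerRecord<>("truck-positions", "truck-42", "lat=..,lon=.."));
        }
    }
}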
14. Consumer
  • One consumer per thread is the rule; a consumer must not be multi-threaded.
  • records-lag-max (monitoring metric): the maximum lag in terms of number of records for any partition in this window. An increasing value over time is your best indication that the consumer group is not keeping up with the producers.
15. Consumer Group

16. Consumer Offset
  • When a consumer in a group has processed the data received from Kafka, it commits the offsets in a Kafka topic named __consumer_offsets, which is used so that when a consumer dies, it is able to read back from where it left off.
17. Delivery Semantics
  • At most once: offsets are committed as soon as the message batch is received. If the processing goes wrong, the message will be lost (it won't be read again).
  • At least once (default): offsets are committed after the message is processed. If the processing goes wrong, the message will be read again. This can result in duplicate processing of messages, so make sure your processing is idempotent (i.e. re-processing a message won't impact your systems). For most applications, we use this and ensure processing is idempotent.
  • Exactly once: can only be achieved for Kafka=>Kafka workflows using the Kafka Streams API. For Kafka=>Sink workflows, use an idempotent consumer.
18. Consumer Offset commit strategy
  • enable.auto.commit=true & synchronous processing of batches: with auto commit, offsets will be committed automatically for you at a regular interval (auto.commit.interval.ms=5000 by default) every time you call .poll(). If you don't use synchronous processing, you will be in "at most once" behavior because offsets will be committed before your data is processed.
  • enable.auto.commit=false & manual commit of offsets (recommended; see the sketch below)
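
Here is a sketch of the recommended strategy: a Java consumer with auto commit disabled that commits offsets manually only after the batch is processed (broker address, group id, and topic are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-first-consumer-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("enable.auto.commit", "false");   // commit manually after processing
        props.put("auto.offset.reset", "earliest"); // start from the beginning if no offset is found

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-first-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // idempotent processing gives at-least-once semantics
                }
                consumer.commitSync(); // commit only after the whole batch is processed
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.printf("%s [%d] offset=%d value=%s%n",
                record.topic(), record.partition(), record.offset(), record.value());
    }
}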
19. Consumer Offset reset behavior
  • auto.offset.reset=latest: will read from the end of the log
  • auto.offset.reset=earliest: will read from the start of the log
  • auto.offset.reset=none: will throw an exception if no offset is found
  • Consumer offsets can be lost if the consumer hasn't read new data in 7 days. This can be controlled by the broker setting offset.retention.minutes.
20. Consumer Poll Behavior
  • fetch.min.bytes=1 (default): controls how much data you want to pull at least on each request. Helps improve throughput and decrease the number of requests, at the cost of latency.
  • max.poll.records=500 (default): controls how many records to receive per poll request. Increase if your messages are very small and you have a lot of available RAM.
  • max.partition.fetch.bytes=1MB (default): maximum data returned by the broker per partition. If you read from 100 partitions, you will need a lot of memory (RAM).
  • fetch.max.bytes=50MB (default): maximum data returned for each fetch request (covers multiple partitions). The consumer performs multiple fetches in parallel.
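
The same fetch-tuning settings expressed as consumer properties in Java (the values simply mirror the defaults listed above; tune them for your workload):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerFetchTuning {
    static Properties fetchTuningProps() {
        Properties props = new Properties();
        // defaults shown; raise these for higher throughput at the cost of latency/memory
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1024 * 1024);
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50 * 1024 * 1024);
        return props;
    }
}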
21. Consumer Heartbeat Thread
  • The heartbeat mechanism is used to detect if the consumer application is dead.
  • session.timeout.ms=10 seconds (default): if no heartbeat is sent during a 10-second period, the consumer is considered dead. Set a lower value for faster consumer rebalances.
  • heartbeat.interval.ms=3 seconds (default): a heartbeat is sent every 3 seconds, usually 1/3rd of session.timeout.ms.
22. Consumer Poll Thread
  • The poll mechanism is also used to detect if the consumer application is dead.
  • max.poll.interval.ms=5 minutes (default): max amount of time between two .poll() calls before declaring the consumer dead. If processing of a message batch generally takes more time in your application, you should increase this interval.
23. Kafka Guarantees
  • Messages are appended to a topic-partition in the order they are sent.
  • Consumers read messages in the order they are stored in a topic-partition.
  • With a replication factor of N, producers and consumers can tolerate up to N-1 brokers being down.
  • As long as the number of partitions remains constant for a topic (no new partitions), the same key will always go to the same partition.
24. Client Bi-Directional Compatibility
  • An older client (e.g. 1.1) can talk to a newer broker (e.g. 2.0).
  • A newer client (e.g. 2.0) can talk to an older broker (e.g. 1.1).
25. Kafka Connect
  • Source connector: gets data from a common data source into Kafka (an example config is sketched below).
  • Sink connector: publishes data from Kafka to a common data store.
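
As a concrete illustration, here is a sketch of a standalone source connector configuration; the file path and names are placeholders, and FileStreamSourceConnector is the demo connector that ships with Apache Kafka:

name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=my-first-topic

You can run it with the standalone worker, e.g.:

> bin/connect-standalone.sh config/connect-standalone.properties local-file-source.properties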
26. Zookeeper
  • ZooKeeper servers will be deployed on multiple nodes. This is called an ensemble. An ensemble is a set of 2n + 1 ZooKeeper servers where n is any number greater than 0. The odd number of servers allows ZooKeeper to perform majority elections for leadership. At any given time, there can be up to n failed servers in an ensemble and the ZooKeeper cluster will keep quorum. If at any time, quorum is lost, the ZooKeeper cluster will go down. 
  • In Zookeeper multi-node configuration, initLimit and syncLimit are used to govern how long following ZooKeeper servers can take to initialize with the current leader and how long they can be out of sync with the leader. 
    • If tickTime=2000, initLimit=5 and syncLimit=2 then a follower can take (tickTime*initLimit) = 10000ms to initialize and may be out of sync for up to (tickTime*syncLimit) = 4000ms
  • In Zookeeper multi-node configuration, The server.* properties set the ensemble membership. The format is server.<myid>=<hostname>:<leaderport>:<electionport>. Some explanation:
    • myid is the server identification number. In this example, there are three servers, so each one will have a different myid with values 1, 2, and 3 respectively. The myid is set by creating a file named myid in the dataDir that contains a single integer in human readable ASCII text. This value must match one of the myid values from the configuration file. If another ensemble member has already been started with a conflicting myid value, an error will be thrown upon startup.
    • leaderport is used by followers to connect to the active leader. This port should be open between all ZooKeeper ensemble members.
    • electionport is used to perform leader elections between ensemble members. This port should be open between all ZooKeeper ensemble members.
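
Putting the above together, here is a sketch of a three-node ensemble configuration (hostnames and dataDir are placeholders) using the tickTime, initLimit, and syncLimit values from the example; 2888 is the peer/quorum port and 3888 is the leader election port:

tickTime=2000
initLimit=5
syncLimit=2
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888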

KAFKA CLI

1. Start a zookeeper at default port 2181

> bin/zookeeper-server-start.sh config/zookeeper.properties

2. Start a kafka server at default port 9092


> bin/kafka-server-start.sh config/server.properties

3. Create a kafka topic with name my-first-topic

> bin/kafka-topics.sh --zookeeper localhost:2181 --topic my-first-topic --create --replication-factor 1 --partitions 1

4. List all kafka topics

> bin/kafka-topics.sh --zookeeper localhost:2181 --list

5. Describe kafka topic my-first-topic

> bin/kafka-topics.sh --zookeeper localhost:2181 --topic my-first-topic --describe

6. Delete kafka topic my-first-topic

> bin/kafka-topics.sh --zookeeper localhost:2181 --topic my-first-topic --delete
Note: This will have no impact if delete.topic.enable is not set to true


7. Find out all the partitions without a leader

> bin/kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions

8. Produce messages to Kafka topic my-first-topic

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-first-topic --producer-property acks=all
>hello ashish
>learning kafka
>^C

9. Start Consuming messages from kafka topic my-first-topic

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-first-topic --from-beginning
>hello ashish
>learning kafka

10. Start Consuming messages in a consumer group from kafka topic my-first-topic

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-first-topic --group my-first-consumer-group --from-beginning

11. List all consumer groups

> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

12. Describe consumer group

> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-first-consumer-group

13. Reset offset of consumer group to replay all messages

> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group my-first-consumer-group --reset-offsets --to-earliest --execute --topic my-first-topic

14. Shift offsets by 2 (forward) as another strategy

> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group my-first-consumer-group --reset-offsets --shift-by 2 --execute --topic my-first-topic

15. Shift offsets by 2 (backward) as another strategy

> bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group my-first-consumer-group --reset-offsets --shift-by -2 --execute --topic my-first-topic
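
Note (also pointed out in the comments below): in newer Kafka versions the --zookeeper flag of these tools is deprecated in favor of --bootstrap-server, e.g.:

> bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic my-first-topic --describe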


KAFKA API

  • Refer to my blog post on how to create a safe and high-throughput Kafka producer using Java.
  • Refer to my blog post on how to create a Kafka consumer using Java with auto commit disabled and manual committing of offsets.

Default Ports

  • Zookeeper: 2181
  • Zookeeper Peer Port (quorum): 2888
  • Zookeeper Leader Election Port: 3888
  • Broker: 9092
  • REST Proxy: 8082
  • Schema Registry: 8081
  • KSQL: 8088

63 comments:

  1. Hi! This is good stuff. Thanks! As per your experience, what percent of questions were asked from the Udemy's 150 question set?

    Replies
    1. Can you please provide the Udemy question set link?

    2. Hi Kumar,

      You can buy the question set from the Udemy website. Here is the link:
      https://www.udemy.com/course/confluent-certified-developer-for-apache-kafka/

      PS: I don't have any affiliation with Udemy. Please buy it only if you feel like it.
  2. Wow.. this is awesome stuff... I was looking for some material to prepare for the exam... I don't have experience but have been reading the Confluent Kafka Definitive Guide and Apache Kafka Series - Learn Apache Kafka for Beginners.
    When I came across your blog, I felt this is the right material to guide me on what I was looking for to pass the exam.
    I have a question: do I have to read the complete Apache Kafka documentation and Confluent documentation, or just the sections which you have given above?
    Thanks for your help

    Replies
    1. I recommend getting a basic understanding first before relying on this blog. These are like final exam notes which you want to go through one or two days before appearing for the exam.
      I would suggest starting with the Confluent Kafka Definitive Guide. Read the first 5 to 6 sections of that PDF to get a basic understanding of Kafka. I strongly recommend going through the Udemy Kafka 150-question set. It will give you an understanding of what kind of questions to expect in the exam. Feel free to ask more questions.

    2. I hadn't read your comment fully. Since you are reading the Confluent Kafka Definitive Guide and watching the Udemy Apache Kafka Series, you are on the right track with your preparation.

    3. Thanks buddy for your response and also for the information you have posted.
  3. Hi,

    I am not a Java developer but have deployed Enterprise Confluent Kafka for one of our customers. I have been working with Kafka for more than one year and have been involved in all sorts of plugins and services related to Kafka. I have deployed Kafka with Kerberos, SASL, and SSL/TLS. I have set up Prometheus exporters and Grafana dashboards for Kafka. I have been working with KSQL, Schema Registry, REST Proxy, and Connect services. But I am not a Java developer; I have been a cloud specialist and infrastructure engineer for the past 10 years. I am comfortable with Kafka internals and pretty clear on Kafka end to end, except for developing producers/consumers from scratch, though I can understand and help resolve any issues developers face in a consumer or producer.
    Suggest me how this certification will be suitable for my career.

    Replies
    1. Sorry for the late reply.

      I think you have worked on almost everything in Kafka and you can easily pass this certification even if you are not a developer. I must say that there are very few Kafka certified developers in the market, and companies always prefer certified developers over those who just mention that they have worked on Kafka, even if they are really good at it. I have been getting more job interview calls from the moment I added this certification to my resume and LinkedIn profile.
  5. Hi Ashish,

    I think the ports list in the Default Ports section is slightly off. The zookeeper leader election port is 3888, and the peer/communication port is 2888. You captured it the other way around. The same is the case with section 26, Zookeeper.

    References -
    https://www.confluent.io/wp-content/uploads/confluent-kafka-definitive-guide-complete.pdf [PAGES 19,20]
    https://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
    https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/administration/content/zookeeper-ports.html

    Replies
    1. Updated the blogpost with correct ports. Thanks :)
  6. Great work capturing all of this!

  7. I have just cracked the CCDAK in 2 weeks, following most of your recommendations and the theory summary (my background: programmer + 6 months of working experience with Confluent Kafka).

    Many thanks!

    Replies
    1. Congratulations on your certification. Keep going !!!
  8. Hi Ashish,
    Thanks for your notes.
    Would like to know what percentage of questions are related to the Confluent version of Kafka, i.e. Schema Registry, REST Proxy, KSQL, etc.

    Replies
    1. I would say 30%. You have to study the Confluent version of Kafka as well to pass the exam. I also felt the same (why should I study Confluent things for a Kafka exam?) but this is the way it is. Good luck !!!
  9. Hi Ashish,
    Thanks for your guidance and the notes. I would like to know how many hours of prep are required for the Kafka certification.
    Also, what is the difference between CCDAK and CCOAK?

    Replies
    1. Hi Vinny, I believe 40-50 hours of prep are required for the Kafka certification.
      Confluent has introduced the CCOAK certification recently. CCOAK is mainly for DevOps engineers focusing on building and managing Kafka clusters. CCDAK is mainly for developers and solution architects focusing on design, producers, and consumers. If you are still not sure, I recommend going for CCDAK as it is a more comprehensive exam compared to CCOAK.

    2. Thanks a lot Ashish. That really helps.
  11. Your blog post is really crisp and concise. The notes help big time through last minute prep.

    Replies
    1. Thank you very much. Keep learning...Keep going !!!
  12. Hi Ashish,

    Thanks for such a detailed post. My question is about what percentage of questions were on the Java producer and consumer APIs, or required debugging Java code or choosing the correct Java code. I am a Python developer and know the pykafka Python library for Kafka but have no experience with Java or the Java Kafka API.
    Will there be a lot of questions related to Java?

    Regards
    Vardhan Bhoumik

    Replies
    1. Don't worry, there won't be any questions asking you to debug Java code. Even if there is a Java producer/consumer API related question, you will be able to answer it even if you know another language.

    2. Thanks Ashish for your prompt response. :)
  13. "Replication factor can not be greater then number of broker in the kafka cluster. If topic is having a replication factor of 3 then each partition will **leave** on 3 different brokers."
    correction
    "Replication factor can not be greater then number of broker in the kafka cluster. If topic is having a replication factor of 3 then each partition will ****live***** on 3 different brokers."

  14. Thanks. Yes, I will update this blog with more info whenever I find something.

  15. Commands to create/delete and query metadata of topics now use --bootstrap-server;
    zookeeper usage in the command is deprecated.

    Replies
    1. Thanks. I will check and update the notes accordingly.
  16. Cleared the Kafka certification yesterday. Heartfelt thanks to you for posting your exam notes. They proved to be really useful.

    Replies
    1. Wow, great. Congratulations on your new achievement.
  17. Can we expect questions on KSQL, joins, load rebalancing?
    I can't see anything like that in the blog. Also on KStreams, Kafka Connect, state stores, internal topics, etc.?

    Replies
    1. Yes, expect questions from each area: KSQL, joins, rebalancing, KStreams, Kafka Connect, state stores.

      I have given links to my other blog pages in the "Questions from CCDAK Exam" section on this page.
  18. Also, is it absolutely necessary to take the Udemy test?
    Or can you share a test if you have one, as I cannot find any sample test online to validate my knowledge.

    Replies
    1. Yes, there is no free sample test available online to validate your knowledge.

      Though it is not necessary, I highly recommend buying the Udemy test as the questions follow a pattern similar to the actual exam.

      Please note that I cannot share any test at this moment.
  19. Can anyone tell me how much I need to score in CCDAK, i.e. how many questions I need to answer correctly? And if I am comfortable with the Udemy test series, can I expect the same in CCDAK?

    Replies
    1. Sorry for the late reply, Mukesh. Confluent doesn't mention a passing score for CCDAK. They only tell you pass or fail without giving any score. Based on my experience, I assume it is somewhere around 80% correct answers.
      Moreover, if you are comfortable with the Udemy test series then you are good to go for the exam. Best of luck !!!!
  20. REST proxy port is 8083, not 8082 if I'm not wrong.

    Replies
    1. Hi, REST Proxy is a Confluent concept rather than an original Kafka concept. Its default port is 8082 as mentioned in the Confluent documentation. Am I missing something? Where did you refer?

      https://docs.confluent.io/current/kafka-rest/index.html
  21. Looking at the kafka-topics command, --zookeeper is now deprecated:

    --zookeeper: DEPRECATED. The connection string for the zookeeper connection in the form host:port. Multiple hosts can be given to allow fail-over.

    And you have to use --bootstrap-server instead. When this is provided, a direct Zookeeper connection won't be required.
  22. Hello Ashish,
    First of all, thank you. I just passed the exam, and your notes did help me a lot. In the process of preparation I created my own notes, mainly because when I write them, I organize the material and it makes me remember the information better. In case you are OK with sharing them, they are available at http://naftulinconsulting.com/CCDAK_Preparation_Notes.pdf

    Replies
    1. Hi Henry. Thanks for asking. As long as more people benefit from this, I am okay :)
  23. No, there is no negative scoring.
    Please follow the updated post here:
    https://codingnconcepts.com/post/apache-kafka-ccdak-exam-notes/
  24. Thanks Ashish for guidance. Your blog really helped. Earned CCDAK certificate today. Thanks a lot🙃

    Replies
    1. Good to hear that. Many congratulations on your certification. Keep it up.

    2. Nishant, congratulations on clearing the Kafka certification.
      Did you have to read the complete Kafka Definitive Guide?
      Also, how useful were the Kafka test series from Udemy?

    3. Congratulations Nishant.
      How many questions came from the Udemy CCDAK dumps (of the 150 questions)?
  26. Does anyone have dumps for this exam?

    Replies
    1. I did not find any dumps for the exam, though you can:
      1. Buy the Udemy Kafka sample questions
      2. Go to my latest blog post on CCDAK where I have added more sample exam questions and also updated the notes with more info about the exam:
      https://codingnconcepts.com/post/apache-kafka-ccdak-exam-notes/
  27. @ashish lahoti, thanks.
    Do you recommend anything for CCOAK?

    Replies
    1. CCOAK is a new exam with emphasis on DevOps topics like Kafka metrics, Zookeeper, clusters, and CLI. I do not have any concrete information at the moment, but keep in mind that the core concepts of Kafka such as architecture, CLI, Streams, and the APIs will be part of the exam no matter whether it is CCDAK or CCOAK. I will try to make a post for CCOAK and keep you updated.
  28. Thanks Ashish for putting this together and the guidance. I just got my CCDAK cert.

    Replies
    1. Wow. Congratulations on your achievement. Keep it up !!

    2. There is a lot of work done to collate all the needed stuff in the right order.... Great job Ashish.

    3. Hi Sandy, tomorrow I am going to appear for the exam.
      Can you share how the questions are framed? How useful were the 150 questions from Udemy?
  29. Appreciate your efforts in consolidating the notes and prep steps. I completed the certification yesterday.
