
We brought Kafbat UI back to life — a clean, modern UI for Apache Kafka (kafka-ui), originally built by the core team that created the widely used Kafka tools you may have seen before.

Key features:

Built by the original maintainers

Instant cluster introspection and topic browsing

Zero-config local startup (Docker support included)

Actively maintained & open source

GitHub: https://github.com/kafbat/kafka-ui

We’d love your feedback and contributions — let’s build the Kafka UI we all wanted back.
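
If it helps, here is what the zero-config local startup can look like with docker-compose. The image name and environment variable are assumptions based on the project's README, so double-check there:

```yaml
# docker-compose.yml: run the UI on http://localhost:8080 with dynamic
# config enabled, so clusters can be added from the browser instead of
# being declared up front via env vars.
services:
  kafbat-ui:
    image: ghcr.io/kafbat/kafka-ui:latest
    ports:
      - "8080:8080"
    environment:
      DYNAMIC_CONFIG_ENABLED: "true"
```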


This is truly the missing piece for any streaming infrastructure, bringing much-needed observability to all Kafka users!


Thanks for your kind words!


You can find the source code of our platform here: https://github.com/opendatadiscovery/odd-platform

We are 100% open source: not only the architecture but also the implementation.


The comment was for the GP who posted the blog and paper link. It was not for open data discovery, sorry if there was any confusion. Excellent work at Open Data Discovery. I intend to try out this software.


Hi Cilvic, this is an open-source product, so you can use it for free. If you have any questions or need any assistance, we'd be happy to help, and at the same time we hope you'll help our product with your feedback and real-world use cases.


Thanks. In the YouTube video you mention ODD v4 with "enterprise features". Do you plan to release paid enterprise features?


No, that was role-based access control and enterprise database support.


You can find detailed documentation here https://docs.opendatadiscovery.org/. We would love to hear any feedback or questions you may have about our product! Here is a link to our Slack community https://go.opendatadiscovery.org/slack


I did see that and was disappointed. It describes the features in full, but I was looking for the feature setup to see whether it's something I can easily integrate. Don't get me wrong, I love the idea, and it's young so I don't fault the team, but I'm not going to get Docker running just to find out I have to reformat all my tests or write way too many new connections to enable any of these features.


I see your point. As you mentioned, the product is rather young and we continue to develop it. I agree that documentation is one of the fundamental parts. Thank you for your input; we will find a way to make the documentation more straightforward and useful for cases like the one you've described.

Perhaps you would be interested in a call with us, where we could answer all your questions, including integration with your infrastructure, and help configure the platform if needed?


It's not confusing, it's well written, I just want more. But I get that it's a community project and it takes time to do all this stuff for free. I'm just ~not so~ patiently waiting for docs aimed more at the person doing the setup than at the concepts.


Well, for setting up the platform's features there's this page: https://docs.opendatadiscovery.org/configuration-and-deploym...

It explains how to set up the platform so that certain features are enabled or disabled. Maybe you'll find it useful.

It'd be great if you could provide an example, in case I've misunderstood you.


Well, like adding a test from pandas/Great Expectations. The software looks great when it's set up and everything is already added, so when trying to add one myself I just have to imagine. Great Expectations makes sense, probably an API hook setup, but I'm using pandas mostly, so how do I add one? Is it going to be more work to use existing tests? If so, how much? Really, I'm trying to estimate how long setting up everything from my pipeline inside the product would take as well. Since I want a junior doing a lot of this too, is this going to be weirdly hard for a math uni grad? You know?

In the online demo I didn't see the ability to do that on that type of account, which I don't really mind; it's just part of gauging how long it would take to go from zero to running to useful for the company.

Thanks for the help. I'm going to keep an eye on the product in the future for sure. It's just that, at this point, it's still more work for me to find out whether I want to do the work to use it.


Gotcha!

Thank you for the input, we are going to work on this.


Hi, we're the team behind this product! We updated our demo to require social logins so that spam doesn't get through. There are no logs being collected, and we're not selling this information either. If you don't want the online version, just head over to https://github.com/opendatadiscovery/odd-platform/tree/main/... and run it locally using docker-compose.
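
For anyone going the local route, a hypothetical minimal docker-compose sketch; the image names, env vars, and the Postgres dependency are assumptions on my part, so use the maintained file from the linked repo directory as the source of truth:

```yaml
# docker-compose.yml: ODD Platform backed by a local PostgreSQL instance.
services:
  database:
    image: postgres:13
    environment:
      POSTGRES_USER: odd
      POSTGRES_PASSWORD: odd
      POSTGRES_DB: odd
  odd-platform:
    image: ghcr.io/opendatadiscovery/odd-platform:latest
    depends_on:
      - database
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://database:5432/odd
      SPRING_DATASOURCE_USERNAME: odd
      SPRING_DATASOURCE_PASSWORD: odd
    ports:
      - "8080:8080"
```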


Thanks folks. How do I contribute to the project?


Thank you!

We have a lot of repositories on GitHub; please feel free to pick any issue from the list. Do not hesitate to ask us anything in GitHub issue threads or in our Slack community. Here are the links for your convenience:

1. ODD Platform GitHub: https://github.com/opendatadiscovery/odd-platform

2. Slack Community: https://go.opendatadiscovery.org/slack

3. Documentation with information on how to contribute: https://docs.opendatadiscovery.org/developer-guides/how-to-c...


UI for Apache Kafka also provides full support for Kafka Connect.



  Hi, db3pt0
  Thanks for the detailed feedback!

  - The CleanupPolicy
  The CleanupPolicy issue is fixed: https://github.com/provectus/kafka-ui/issues/925

  - When viewing Cluster -> Topic -> Consumers, it seems like far more is being loaded than just consumer groups for that particular topic.
  It takes much longer to load than, e.g., Kowl does for pulling the same information.
  There is only one way to filter consumer groups by topic: get all consumer groups, enrich them with current members and committed offsets, and then filter them by topic.
  If you have a lot of consumer groups, this might take some time. We are thinking about this issue and will try to improve it in the next versions.
  (https://github.com/provectus/kafka-ui/issues/927)
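
To make the cost concrete, the filtering step described above amounts to something like this sketch (data shapes and names are hypothetical, not the project's actual code):

```python
from typing import Dict, List, Set


def groups_for_topic(all_groups: Dict[str, Set[str]], topic: str) -> List[str]:
    """Keep only the consumer groups that touch `topic`.

    `all_groups` maps group id -> topics the group has members or committed
    offsets on, i.e. the result of the expensive "list every group, then
    describe each and fetch its offsets" step. Only after that enrichment
    can we filter by topic, which is why the page scales with the total
    number of groups, not the number of groups on this topic.
    """
    return sorted(g for g, topics in all_groups.items() if topic in topics)


groups = {
    "billing-consumer": {"payments", "invoices"},
    "audit-consumer": {"payments"},
    "metrics-consumer": {"cpu-stats"},
}
print(groups_for_topic(groups, "payments"))
# ['audit-consumer', 'billing-consumer']
```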

  - Similar feedback for when clicking on an individual consumer group (example URL path: "/ui/clusters/dev/consumer-groups/my-group"). It takes a very long time to load.
  This looks rather strange. For a single consumer group we get the group description and then enrich it with topic info (end offsets). That should be fast enough.

  - If any permissions issues are encountered while loading the individual consumer group, the entire request fails. That wouldn't necessarily be an issue, but it is when you're loading all consumer groups, and not just the one requested
  Thanks for describing this. We created an issue for it: https://github.com/provectus/kafka-ui/issues/928


  - I don't see any docs on how to access a topic that is secured with certificates (Kowl's relevant config [2])
  You have to pass the SSL store config to the Kafka client (we'll add docs in the next versions: https://github.com/provectus/kafka-ui/issues/929):

  KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION=/var/private/ssl/kafka.server.keystore.jks
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD=test1234
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEY_PASSWORD=test1234
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION=/var/private/ssl/kafka.server.truststore.jks
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD=test1234

  - If a schema registry uses a self-signed certificate (or one signed by a corporation's CA), there's no way to pass the certificate through a config or skip validation (Kowl's relevant config [3]). As it is, you get a 500 error in the API call, but the UI doesn't show an error

  Same as above (https://github.com/provectus/kafka-ui/issues/930):
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION=/var/private/ssl/kafka.server.truststore.jks
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD=test1234
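
For context, the KAFKA_CLUSTERS_0_PROPERTIES_* names above appear to follow the usual Spring relaxed-binding convention, where the suffix maps to a Kafka client property (underscores become dots, uppercase becomes lowercase). A tiny illustration of that assumed mapping:

```python
def env_to_client_prop(env_var: str,
                       prefix: str = "KAFKA_CLUSTERS_0_PROPERTIES_") -> str:
    """Illustrate the assumed env var -> Kafka client property mapping
    (Spring relaxed-binding style): strip the prefix, lowercase, and
    turn underscores into dots."""
    return env_var[len(prefix):].lower().replace("_", ".")


print(env_to_client_prop("KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION"))
# ssl.keystore.location
```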

