ApertureDB Multimodal AI Workflows - Automate common AI tasks for multimodal data
How do you easily generate embeddings, detect objects, infer new attributes, or query your multimodal data? Stop wrestling with your datasets - use ApertureDB Multimodal AI workflows instead! Ingest or enrich complex datasets, run Jupyter notebooks, and more.
Replies
If you want to see a live demo of how you can use workflows, do join us for our lunch & learn this morning at 9am PT:
https://lu.ma/vnabtolp
Success.ai
@vishakha_gupta4 the prebuilt workflows are great, can teams create their own workflows, or is customization limited? Any API support for integrating with existing ML pipelines?
@vishakha_gupta4 @hamza_afzal_butt The best part is that what the workflows do is open source. While the workflows on the cloud UI are a subset of possibilities, this repository has all the detailed workings of workflows under the hood.
With this repository as a reference guide, here are the possibilities:
- You may refer to what those scripts are doing to get a blueprint for building your own workflow.
- You may submit a PR. A PR for any custom workflow would be highly encouraged. TIA.
- If it is a general enough workflow, it would eventually get published on the cloud UI too!
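As a rough sketch of what a custom workflow script might do, here is the shape of an ApertureDB JSON query a workflow could issue. The property names and values below are hypothetical, and actually sending the query requires a live ApertureDB instance, so only the query construction is shown:

```python
import json

def build_ingest_query(image_id: str, label: str) -> list:
    """Build an AddImage query that attaches searchable properties to an image blob.
    Property names here ("image_id", "label") are illustrative, not required by the API."""
    return [{
        "AddImage": {
            "properties": {"image_id": image_id, "label": label},
        }
    }]

query = build_ingest_query("img_001", "cat")
print(json.dumps(query, indent=2))

# With a running instance, a workflow would send the query plus the image bytes,
# e.g. (connection details are placeholders):
# from aperturedb.Connector import Connector
# db = Connector(host="localhost", user="admin", password="<password>")
# response, _ = db.query(query, [image_bytes])
```

The point is that a workflow is ultimately just a script issuing these JSON commands, which is why the repo's scripts work as a blueprint.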
@hamza_afzal_butt do join in the lunch & learn happening now - it's one of the things Luis can answer, showing how to build from the repo as Gautam described: https://lu.ma/vnabtolp
LangDrive
This is a game-changer for AI developers! Congrats on the launch @ApertureDB
@michael_vandi thanks a lot. We are happy to be working with you all!
This is the hidden missing piece in SO MANY ML workloads. Great work by the ApertureDB team!
Thank you @aronchick - we look forward to our collaborative examples coming in the near future, demonstrating how everyone can use these end to end, all the way from edge to query.
Hi Vishakha – How does ApertureDB compare to alternatives in terms of read/write speed and query performance on both small and large datasets? Additionally, does it have any unique optimizations or "special sauce" for faster token processing?
@mceoin great question - we have some recent benchmarking results summarized here: https://docs.aperturedata.io/category/benchmarks--comparisons
Mainly, for vector search, we are anywhere between 2-10X faster in KNN throughput and offer sub-10ms latencies on the service side. For graph search, our prior evaluations against Neo4j put us sometimes over 30X faster. ApertureDB continues to scale for very large workloads (billion-scale graphs so far, and tens of millions of embeddings per search space). We also have optimizations when loading data - so far we have tested these mostly on parallel loads of large numbers of blobs or images - and we can extend that to faster token processing, though we have yet to test it.
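For context on the KNN numbers above, here is a sketch of what a vector search looks like in ApertureDB's JSON query language. The descriptor set name ("clip_embeddings") and returned property are hypothetical, and the query vector travels alongside the JSON as a binary blob; sending it requires a live instance, so only construction is shown:

```python
import json
import struct

def build_knn_query(set_name: str, k: int) -> list:
    """Build a FindDescriptor query returning the k nearest neighbors
    from the named descriptor set."""
    return [{
        "FindDescriptor": {
            "set": set_name,
            "k_neighbors": k,
            "results": {"list": ["image_id"]},  # "image_id" is an assumed property
        }
    }]

# The query vector is packed as little-endian float32 bytes and passed as a blob.
vector = [0.1, 0.2, 0.3]
blob = struct.pack(f"<{len(vector)}f", *vector)

query = build_knn_query("clip_embeddings", k=10)
print(json.dumps(query))

# With a connection to a running instance:
# response, blobs = db.query(query, [blob])
```

Because the graph and vector index live in the same engine, the same query can also filter neighbors by graph-connected metadata, which is where the combined benchmarks come in.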
Love this! Super useful for devs. Congrats on the launch!
@mahima_manik thank you for your support. Looking forward to integrating this with Datahawk!