CrayEye

Multimodal multitool for vision prompts using real-time data

Craft & share multimodal LLM vision prompts infused with real-world context from device sensors & APIs. Free, open-source, & written by A.I.


Alexandria Redmon
I made CrayEye to better explore the capabilities of frontier multimodal vision LLMs, particularly when augmented with real-time device data and APIs. CrayEye lets you craft and share prompts to be executed by the model of your choice, with optional interpolations for values such as the current device's location or the weather forecast for that location. The application is cross-platform (iOS and Android), open-source, and was written entirely by an LLM.

Helpful default prompts include:

📆 Add to Calendar - Create a list of tappable Google Calendar links for any event details detected.
😌 Will this be comfortable? - Estimate how comfortable an outfit will be given the current local weather forecast.
🎨 Art guide - Interpret a piece of art and provide a detailed analysis of it and its influences.
👷‍♀️ Who made this? - Assessment of who made the thing(s) in the image, using current location for context.
🐦 What kind of bird is this? - Identification of bird(s), using current location for context.
🪻 Name that plant - Comprehensive rundown of plant(s) identified, using current location for context.
🔬 What's this made of? - Detailed analysis of the material composition of item(s).
⚖️ Calorie counter - Estimate the itemized and total calories in any visible food item(s).
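The interpolation idea described above can be sketched in a few lines: a prompt template contains placeholders that get filled with real-time device values just before the prompt is sent to the model. This is a minimal illustrative sketch, not CrayEye's actual code; the placeholder names (`{location}`, `{weather}`) and the `interpolate_prompt` helper are hypothetical.

```python
# Hypothetical sketch of prompt interpolation, in the spirit of CrayEye's
# description. Placeholder syntax and names are assumptions, not the app's
# actual implementation.

def interpolate_prompt(template: str, context: dict) -> str:
    """Replace each {key} placeholder with its real-time value."""
    prompt = template
    for key, value in context.items():
        prompt = prompt.replace("{" + key + "}", value)
    return prompt

# Example: the "Will this be comfortable?" prompt, with device-sourced values.
template = (
    "Estimate how comfortable this outfit will be given "
    "{weather} at {location}."
)
context = {
    "location": "47.61N, 122.33W",   # e.g. from the device's GPS
    "weather": "overcast, 12\u00b0C",  # e.g. from a weather API
}
print(interpolate_prompt(template, context))
```

The filled-in prompt, together with the captured image, would then be passed to whichever multimodal model the user selected.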
