How it works:
- Voice input via Google Cloud Speech-to-Text (no typing required)
- Gemini 2.0 Flash parses natural language into structured intents
- Vector embeddings (text-multilingual-embedding-002) stored in PostgreSQL with pgvector
- Semantic matching with cosine similarity + a location proximity bonus
- Two-sided: seekers are matched with providers, and vice versa
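The "cosine similarity + location proximity bonus" step could be sketched roughly like this. This is a minimal illustration, not the app's actual code: the radius cutoff, bonus weight, and function names are all assumptions.

```typescript
// Hypothetical scoring sketch: embedding similarity plus a distance-based bonus.
// RADIUS_KM and MAX_BONUS are made-up parameters for illustration only.
const RADIUS_KM = 50;
const MAX_BONUS = 0.2;

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Great-circle distance in km between two lat/lng points (haversine formula).
function haversineKm(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const R = 6371; // Earth radius in km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Final score: cosine similarity plus a bonus that decays linearly to zero
// once the two parties are more than RADIUS_KM apart.
function matchScore(
  embA: number[], embB: number[],
  locA: [number, number], locB: [number, number],
): number {
  const sim = cosineSimilarity(embA, embB);
  const km = haversineKm(locA[0], locA[1], locB[0], locB[1]);
  const bonus = km >= RADIUS_KM ? 0 : MAX_BONUS * (1 - km / RADIUS_KM);
  return sim + bonus;
}
```

In production the similarity half would presumably run inside PostgreSQL via pgvector's distance operators rather than in application code; only the re-ranking with the proximity bonus would need to happen app-side.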
Tech stack: Next.js 16, TypeORM + PostgreSQL/pgvector, Google Vertex AI, AWS SES
Try it: No sign-up required to create an intent. Just click the microphone and describe what you're looking for.
Solo developer in Austin, TX. Would love feedback on the matching quality and voice UX.