{"brand":"CrowLingo","publisher":"Kymata Labs","url":"https://crowlingo.org","contact":"contact@kymatalabs.com","description":"AI-powered animal language processing for the American crow — methods, decoding, ethics, and the practical pipeline from caw to cluster.","keywords":["American crow","Corvus brachyrhynchos","crow vocalization","crow communication","crow cognition","crow tool use","crow face recognition","Marzluff mask experiments","corvid intelligence","corvid behavior","cooperative breeding crows","syrinx","crow syrinx anatomy","how crows make sound","bird two-source vocal production","crow frequency range","crow caw frequency","crow harmonic structure","bioacoustics","computational bioacoustics","deep learning bioacoustics","BirdNET","BirdNET embeddings","Perch model bioacoustics","PaSST audio embedding","PANNs audio","AudioMAE","self-supervised audio","SSL audio model","masked spectrogram prediction","audio foundation model","NatureLM-audio","Earth Species Project","ICLR 2025","BEANS-Zero benchmark","audio-language foundation model","latent space","audio embedding","UMAP","PaCMAP","t-SNE","HDBSCAN","dimensionality reduction audio","vocal map UMAP","crow vocal atlas","graded calls","repertoire mapping","what AI can decode about crow calls","caller identity inference","individual signature crow","crow dialect","group-level acoustic centroid","contextual clustering crow","behavioral context audio","crow syntax","combinatorial crow communication","crow language","field recording crow","crow audio recording rig","Sennheiser ME66 crow","Zoom H1 H5 field recorder","behavior log synchronization","clap sync field audio","48kHz 24-bit audio capture","bandpass filter bioacoustics","preprocessing crow audio","noisereduce librosa","audio embedding extraction","from caw to cluster","crow translation pipeline","crow playback ethics","bioacoustic playback rules","animal welfare audio research","no playback near nests","alarm call playback ethics","six rules listening back","responsible animal communication research","demonstrated vs aspirational ALP","animal language processing 2026","real-time bidirectional crow dialogue","compositional decoding animal","wearable audio logger crows","Demartsev 2026 carrion crow","carrion crow repertoire mapping","is there a crow translator app","can AI talk to crows","do crows have language","do crows have grammar","do crows have dialect","what do crow calls mean","how do AI models understand animal sounds"],"sections":[{"name":"The Crow","type":"Pillar","path":"/the-crow","full_url":"https://crowlingo.org/the-crow","description":"Why the American crow as a model species: cognition, sociality, vocal anatomy, and a repertoire dense enough to warrant a map."},{"name":"Vocal anatomy","type":"Sub-page","path":"/the-crow/vocal-anatomy","full_url":"https://crowlingo.org/the-crow/vocal-anatomy","description":"How a crow makes sound — the syrinx, two independent sound sources, the 200 Hz – 8 kHz frequency window, and how it differs from a human larynx."},{"name":"Repertoire Atlas","type":"Interactive","path":"/the-crow/repertoire-atlas","full_url":"https://crowlingo.org/the-crow/repertoire-atlas","description":"An interactive 2-D map of crow vocalizations. ~800 seeded points across nine clusters; click any point to see cluster context, spectrogram, and behavioral probabilities."},{"name":"Cognition & society","type":"Sub-page","path":"/the-crow/cognition-and-society","full_url":"https://crowlingo.org/the-crow/cognition-and-society","description":"What makes the American crow worth taking seriously as a communicative animal — tool use, face recognition, family-group sociality, intergenerational learning."},{"name":"Methods","type":"Pillar","path":"/methods","full_url":"https://crowlingo.org/methods","description":"The new generation of AI audio methods — self-supervised learning, latent spaces, NatureLM-audio — and what they enable for crows specifically."},{"name":"Self-supervised audio","type":"Sub-page","path":"/methods/self-supervised-audio","full_url":"https://crowlingo.org/methods/self-supervised-audio","description":"How self-supervised learning trains audio models without labels — masked prediction, what the model actually learns, why it works for bioacoustics."},{"name":"Latent space 101","type":"Primer","path":"/methods/latent-space-101","full_url":"https://crowlingo.org/methods/latent-space-101","description":"Embeddings, latent spaces, and dimensionality reduction — the minimum mental model for reading a vocal atlas."},{"name":"NatureLM-audio","type":"Reference","path":"/methods/naturelm-audio","full_url":"https://crowlingo.org/methods/naturelm-audio","description":"Earth Species Project's audio-language foundation model for bioacoustics. ICLR 2025. What it does, what it doesn't, how it changed the workflow."},{"name":"Traditional vs ALP","type":"Sub-page","path":"/methods/traditional-vs-alp","full_url":"https://crowlingo.org/methods/traditional-vs-alp","description":"The fifty-year hand-labeling regime versus the new map-based regime. What the field gained; what it gave up."},{"name":"Decoding","type":"Pillar","path":"/decoding","full_url":"https://crowlingo.org/decoding","description":"What we can now see in crow vocalizations that we couldn't see before — repertoire mapping, contextual clustering, individuality, combinatorial evidence."},{"name":"What we can decode now","type":"Flagship","path":"/decoding/what-we-can-decode-now","full_url":"https://crowlingo.org/decoding/what-we-can-decode-now","description":"The four features a self-supervised model extracts from one half-second of crow voice, and what each tells us — pitch contour, harmonic emphasis, duration, spectral grain."},{"name":"Contextual clustering","type":"Analysis","path":"/decoding/contextual-clustering","full_url":"https://crowlingo.org/decoding/contextual-clustering","description":"How latent coordinates correlate with behavior. The Demartsev 2026 carrion-crow preprint as the cleanest current example."},{"name":"Individuality & dialect","type":"Sub-page","path":"/decoding/individuality-and-dialect","full_url":"https://crowlingo.org/decoding/individuality-and-dialect","description":"Caller identity from harmonic signature, group-level acoustic centroids, and how seriously to take the dialect hypothesis."},{"name":"Combinatorial evidence","type":"Sub-page","path":"/decoding/combinatorial-evidence","full_url":"https://crowlingo.org/decoding/combinatorial-evidence","description":"Sequence-level statistical regularities in crow vocalizations and the open question of crow 'syntax'. Honest about behavioral evidence."},{"name":"Pipeline","type":"Centerpiece","path":"/pipeline","full_url":"https://crowlingo.org/pipeline","description":"Eight stages from a phone recording to an interpretable vocal map: capture, detect, preprocess, embed, project & cluster, contextualize, inspect, respond."},{"name":"Record","type":"Stage 1","path":"/pipeline/record","full_url":"https://crowlingo.org/pipeline/record","description":"Field-recording specifics for crow audio: microphone choice, sample rate, mono vs stereo, behavior-log synchronization, ethical floor."},{"name":"Preprocess","type":"Stage 3","path":"/pipeline/preprocess","full_url":"https://crowlingo.org/pipeline/preprocess","description":"Bandpass, peak-normalize, light spectral denoise. The minimum that helps without distorting what the model needs to read."},{"name":"Embed","type":"Stage 4","path":"/pipeline/embed","full_url":"https://crowlingo.org/pipeline/embed","description":"Pick your encoder honestly: BirdNET embeddings, Perch, CLAP, NatureLM-audio. Each is its own space. Disclose which."},{"name":"Cluster & label","type":"Stages 5–7","path":"/pipeline/cluster-and-label","full_url":"https://crowlingo.org/pipeline/cluster-and-label","description":"Project to 2-D for inspection, cluster on the full embeddings, label clusters by exemplars, join to behavior context."},{"name":"Respond","type":"Stage 8","path":"/pipeline/respond","full_url":"https://crowlingo.org/pipeline/respond","description":"How to run a playback session as data collection, not a stunt: pre-registered protocol, observer, time-bounded, halt on distress."},{"name":"Frontier","type":"Pillar","path":"/frontier","full_url":"https://crowlingo.org/frontier","description":"The honest state of the field: what's demonstrated, what's emerging, what's not yet science. Ethics. Open dataset. How to contribute."},{"name":"Current vs aspirational","type":"Sub-page","path":"/frontier/current-vs-aspirational","full_url":"https://crowlingo.org/frontier/current-vs-aspirational","description":"Demonstrated, emerging, and not-yet-science capabilities in animal-language processing for crows. A clean three-bucket framing."},{"name":"Open dataset","type":"Reference","path":"/frontier/open-dataset","full_url":"https://crowlingo.org/frontier/open-dataset","description":"10k+ labeled crow calls planned for v2 release on Hugging Face, CC-BY-NC. v0 placeholder; honest about the timeline."},{"name":"Contribute","type":"Submission","path":"/frontier/contribute","full_url":"https://crowlingo.org/frontier/contribute","description":"How to record crows well, and how to submit your recordings. v0: email + Google Form. v3 ships the proper upload pipeline."},{"name":"Library","type":"Reading list","path":"/library","full_url":"https://crowlingo.org/library","description":"Reading list for crow vocal communication and animal language processing — papers, books, primary sources, organized by constellation."}],"related_tools":[{"name":"Kymata Labs","url":"https://kymatalabs.com","description":"Parent publishing entity. Editorial intelligence + applied AI projects.","category":"Publisher"}],"trust_and_methodology":[{"name":"Ethics","path":"/frontier/ethics","full_url":"https://crowlingo.org/frontier/ethics"},{"name":"About","path":"/about","full_url":"https://crowlingo.org/about"},{"name":"Privacy","path":"/privacy","full_url":"https://crowlingo.org/privacy"},{"name":"Terms","path":"/terms","full_url":"https://crowlingo.org/terms"},{"name":"Disclaimer","path":"/disclaimer","full_url":"https://crowlingo.org/disclaimer"},{"name":"AI & Developer Access","path":"/ai","full_url":"https://crowlingo.org/ai"}],"machine_readable_endpoints":{"llms_txt":"https://crowlingo.org/llms.txt","llms_full_txt":"https://crowlingo.org/llms-full.txt","ai_json":"https://crowlingo.org/ai.json","ai_json_well_known":"https://crowlingo.org/.well-known/ai.json","ai_plugin_json":"https://crowlingo.org/.well-known/ai-plugin.json","openapi_yaml":"https://crowlingo.org/.well-known/openapi.yaml","sitemap_xml":"https://crowlingo.org/sitemap.xml","feed_json":"https://crowlingo.org/feed.json","schema_feed_json":"https://crowlingo.org/schema-feed.json","nlweb_ask":"https://crowlingo.org/api/ask","nlweb_mcp":"https://crowlingo.org/api/mcp"},"updated_at":"2026-05-17T04:05:56.547Z"}