News • Apr 24, 2026

Your Visiting Media Data, Now Conversational. Here’s How It Works.

Chelsea Mullin

Our first MCP integration is read-only on purpose. Here’s what that means for partners and customers building on the Visiting Media platform.

Author: Eric Sniff, CTO, Visiting Media

There’s a friction point that anyone on our platform has hit. You’re prepping for a call, drafting a report, or comparing engagement across your portfolio, and you need a real answer about a property. Tour traffic last quarter. Which experiences are trending. How a media library is actually performing. So you break out of the conversation, open the platform, click around, find the number, and paste it back.

We just plugged that hole.

Starting today, you can connect your chatbots directly to your Visiting Media account through a hosted MCP server. Ask questions about your properties, experiences, media, collections, users, and analytics in plain English. The chatbot reads the answer from your live data: grounded, current, and scoped to exactly what your API key is already allowed to see.

Why MCP, and why hosted

MCP — the Model Context Protocol — is the standard the industry is converging on. Tools like Claude Desktop and ChatGPT, along with a growing list of other assistants, already support it. If you’re a partner building integrations, or a customer whose team is already working inside one of these tools, you don’t need us to build a custom adapter for every agent on the market. Build against the spec once, and you’re done.
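For partners who haven’t worked with the protocol yet, MCP is JSON-RPC 2.0 under the hood: a client discovers what a server exposes with `tools/list` and invokes a tool with `tools/call`. A minimal sketch of that exchange — the tool name and arguments here are illustrative, not the actual VMP tool surface:

```python
import json

# Discover what the server exposes. MCP uses JSON-RPC 2.0 framing;
# "tools/list" and "tools/call" are standard protocol methods.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoke a tool. The tool name and arguments below are hypothetical --
# the real VMP tool names may differ.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_property_analytics",  # hypothetical tool name
        "arguments": {"property_id": "prop_123", "period": "last_quarter"},
    },
}

print(json.dumps(call_request, indent=2))
```

Because every spec-compliant client speaks this same framing, a partner who builds against it once inherits every assistant that supports MCP.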

We went hosted-first because it gets the integration in front of every customer and partner immediately, and lets us keep improving the backend without asking anyone to upgrade anything. And it meets customers where they already are, inside the assistant they’re already using.

A local-only / self-host option isn’t in v1. If you’re a partner whose customers need that posture, talk to us — we’re tracking demand.

What v1 covers

Read-only across: properties (full object, status and attributes), experiences (tours, subsets, custom collections), media (image and video assets, metadata, relationships), collections (how assets are grouped), users (visible to your key’s scope), and analytics (engagement, trends, time-series data).

The same data your dashboards surface today. If the assistant can’t answer something from that surface, it’ll say so. It doesn’t invent a number.

On shipping read-only first

The first question partners are going to ask is why we didn’t ship writes.

Because we want to earn it. Giving an agent read access to data a customer has already exposed to their own dashboards is a low-risk move. Giving an agent the ability to change tags, modify media, or take action on a customer’s behalf is a much bigger step. The right time for that step is after the read experience has been in customers’ hands long enough that we know exactly where the rough edges are.

So v1 is deliberately: read, learn, trust. Write actions are next. A “what can I do?” permission helper (so the AI tool knows your specific scopes upfront and can proactively suggest good questions) is on the roadmap right behind it. A one-click sign-in flow that replaces the API-key-paste setup is also coming.

Partners will be in early on all of it.

How it works

Setup is about two minutes: paste a connection link into your AI tool, paste an existing VMP API key when prompted, and start asking questions.
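Under the hood, that setup amounts to sending your key alongside each request. A sketch of what an authenticated call might look like — the endpoint URL, key value, and the exact header the hosted server expects are all placeholders, not the real values:

```python
import json
import urllib.request

# Hypothetical values -- the real connection link comes from your
# Visiting Media account, and the header name may differ.
MCP_URL = "https://mcp.example.com/v1"  # placeholder, not the real endpoint
API_KEY = "vmp_live_xxx"                # your existing VMP API key

request = urllib.request.Request(
    MCP_URL,
    data=json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # key sent on every request
    },
)
# urllib.request.urlopen(request) would perform the call; omitted here
# because the endpoint above is a placeholder.
```

In practice your AI tool builds and sends this for you; the only manual steps are the two pastes.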

A few things worth knowing if you’re technical: the API key is required on every request and never logged, never echoed in error messages, never stored on our side. The AI tool’s view of the data is exactly the scope of the key. If a key is scoped to one property group inside a management company, that’s all the AI assistant can read — nothing expands at the agent layer. The server speaks standard MCP, so any spec-compliant client can talk to it. And every response is backed by a live API call, no caching layer between the assistant and your actual numbers.
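The scoping guarantee above — nothing expands at the agent layer — is worth making concrete. A sketch, with illustrative data and field names, of how a read is filtered by the key’s scope before anything leaves the server:

```python
# Illustrative data; real property objects carry far more fields.
PROPERTIES = {
    "prop_a": {"group": "coastal", "views": 1200},
    "prop_b": {"group": "coastal", "views": 800},
    "prop_c": {"group": "mountain", "views": 950},
}

def list_properties(key_scope: set[str]) -> dict:
    """Return only the properties the API key is scoped to see."""
    return {pid: p for pid, p in PROPERTIES.items() if p["group"] in key_scope}

# A key scoped to one property group sees only that group,
# no matter what the assistant asks for.
visible = list_properties({"coastal"})
print(sorted(visible))  # -> ['prop_a', 'prop_b']
```

The assistant can phrase the question however it likes; the filter runs on the server, keyed to the credential, so the agent layer has no way to widen the view.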

The MCP server is a thin translation layer on top of a stable API. When we add new endpoints to the platform, they land in the MCP surface without a rewrite.
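One way to picture that “thin translation layer”: each MCP tool is a declarative mapping onto an existing API endpoint, so adding an endpoint means adding one table entry rather than rewriting the server. The tool names and endpoint paths below are illustrative, not the actual VMP surface:

```python
# Hypothetical registry mapping MCP tool names to underlying API calls.
TOOL_REGISTRY = {
    "list_properties": {"method": "GET", "path": "/v1/properties"},
    "get_experience":  {"method": "GET", "path": "/v1/experiences/{id}"},
    "get_analytics":   {"method": "GET", "path": "/v1/analytics/{id}"},
}

def translate(tool_name: str, arguments: dict) -> tuple[str, str]:
    """Translate an MCP tools/call into the underlying API request."""
    spec = TOOL_REGISTRY[tool_name]
    return spec["method"], spec["path"].format(**arguments)

print(translate("get_analytics", {"id": "prop_123"}))
# -> ('GET', '/v1/analytics/prop_123')
```

A new platform endpoint becomes a new registry entry; the translation logic itself never changes.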

What it actually unlocks

The real test of any integration is whether it solves something people were stuck on. A few patterns from internal testing:

  • The mid-call pivot. Your sales manager is prepping for a meeting. The call takes an unexpected turn and they need live property analytics right now. Previously that meant breaking flow, navigating the platform, and adjusting reporting in real time. Now it’s as easy as asking a question.
  • The labor-intensive comparison. Cross-property performance comparisons technically exist in dashboards. They also require enough clicking that the time is better spent on other tasks. Asking your assistant to compare the top three properties in a region this quarter takes about ten seconds.
  • The partner workflow. If you’re building a broader agent — a proposal tool, a meeting assistant, an ops workflow — VMP data now drops in without custom connector work. Build to the spec, inherit the integration.

The assistant won’t fill gaps with guesses when the data isn’t there. That’s what makes it usable for real work rather than just a demo.
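That “no invented numbers” behavior comes down to the tool returning an explicit miss instead of a fabricated value. A sketch with illustrative field names:

```python
# Illustrative data; only tour_views exists for this property.
DATA = {"prop_a": {"tour_views": 1200}}

def read_metric(property_id: str, metric: str) -> dict:
    """Return the metric if it exists; otherwise an explicit miss, never a guess."""
    value = DATA.get(property_id, {}).get(metric)
    if value is None:
        return {"ok": False, "error": f"{metric} not available for {property_id}"}
    return {"ok": True, "value": value}

print(read_metric("prop_a", "tour_views"))  # -> {'ok': True, 'value': 1200}
print(read_metric("prop_a", "bookings"))    # explicit miss, no fabricated number
```

The assistant surfaces that miss as “I can’t answer that from your data,” which is exactly the behavior described above.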

What’s next

This is the first in a recurring series we’re kicking off — shipping hard, interesting problems in public. Up next: AI answer bars, support for agentic harnesses, and semantic search. We’ll write them up as they ship.

The platform is now a place your AI assistants can read from. The dashboards aren’t going anywhere — this is a second door into the same house, one that will allow your teams to move faster and more efficiently.

Read, learn, trust. Then we let you act.

Contact our team to learn more about what this unlocks for you today.
