Businesses are managing more content on a daily basis than ever before. From marketing videos to onboarding tutorials to video conference calls – the volume and velocity of content creation has never been greater. Processing, organizing, and leveraging the intelligence inherent in that content can feel overwhelming, even insurmountable, especially when done manually. But AI can make sense of it all.
Watch this session with Alex Baker, Senior Director of Enterprise Sales at AnyClip, as he discusses the following with Brian Munz, Product Manager at expert.ai:
- Leveraging AI to convert content into data – making video searchable, discoverable, personal, interactive and collaborative
- Compelling examples of how enterprise companies utilize these platforms for multi-language translation
- Combining textual NLU from expert.ai with AnyClip’s Visual Intelligence Technology for video
Transcript:
Brian Munz:
Hey everybody, and welcome to the NLP Stream, which is our regular, if not weekly, if not biweekly live stream about all things NLP. And what we do is every week we like to have people within the world of NLP talk about just what’s going on, different things, different aspects, and ways that NLP is used in the real world, and they can range from conceptual to technical and real world.
And so today I’m actually really excited because this is an area which comes up quite a bit at Expert, which is around video and the extraction of text from video, and then of course, what is done with that text. And so I’m really interested to hear what they say. This is a company that we are collaborating with right now.
And so without further ado, I’d like to get started and introduce Alex Baker, who is the senior director of enterprise sales at AnyClip, and I’ll let him introduce everything else.
Alex Baker:
Great. Thank you, Brian. Thank you very much for having me today. As Brian said, we’ll be going through the rise of smart video, which is unlocking the power of video with AI. My name is Alex Baker. I’m the senior director of enterprise sales at AnyClip. And let’s get started.
So I’ll quickly introduce you to AnyClip. We are the visual intelligence company and we take video content, analyze it, transform that into data, and then create exceptional video experiences for clients, customers, and ultimately the end users.
So I want to first talk about video. Video is dynamic because if you think about it, it’s moving images, it’s spoken word and language, which we’ll talk a lot about today, and then it’s text. It’s those three things that make up video content.
And it’s also very personal. You’re on camera. Every move, every change that you do is being recorded and then it’s documented sometimes for a short period of time, sometimes for a long time or really forever. So for businesses, video really is the most dynamic and really is the most personal form of communication that we have today.
Now let’s talk about language. Language is also very dynamic. Everyone has their own native language and everyone has their own experience with language. So every word, every phrase, every sentence is really an opportunity to convey a message.
So for businesses, what we want to do and what most of our clients want to do is to meet people where they are, meet them on their terms and in their own language. And if you’re able to do that, it’s really an immense opportunity to connect with them on a personal level. So let’s keep that in mind throughout the presentation and then also the questions afterwards, because that’s really the core of what we’re talking about today.
So for visual intelligence, in order to do that, in order to create that personal connection when you’re talking about dynamic content, what we do is transform latent video content into smart video. So what exactly does that mean?
I think about it as going from passive to active and taking that, not only that content, but that information and really that whole experience, and making it into something that you’re proud of, that represents your brand, that represents your product or your service or your organization, and that you’re excited to show to the world.
But then most importantly, it has to drive business results. There has to be an ROI, and there has to be either saving money or making money or making someone’s life easier. That’s really what we’re all about.
So I’d like to talk about the problem. There really is a big problem for companies today that they’re experiencing right now, and that’s taking all of this video content and making sure that it’s managed, organized, searched and categorized.
It’s really, really difficult to do that, especially now that we’re in this hybrid work environment. We’re all working from home, just as I’m doing today, but wherever we are, there’s so much content being created, and it really has exploded. It’s grown exponentially, especially in the last few years, and the pandemic only accelerated that trend.
So it has become a problem, but thankfully there is a solution, and that solution really is clarity. It’s not just adding more technology or more complexity, it’s all about simplicity and simplification. I think that’s all … At least, speaking for myself, that’s what I want, and I think that’s what we all want.
So that’s what AnyClip is here to do. That’s what expert.ai is here to do. But before I get into how our two companies work together, I would like to show you, at a high level, how our products and our technology work.
So I mentioned visual intelligence earlier. We do see this as converting video content into data. So how do we do that? We take any sort of video content, whether that is long form or short form, a webinar, an event, a movie, or something as short as a 15-second advertising spot, along with the data associated with it, and then apply our own proprietary and patented technology through frame-by-frame analysis and key frame detection, as well as the best of the best from third-party technology companies: image recognition, speech recognition, and OCR, which is optical character recognition for on-screen text.
From there, once that analysis is done, which happens automatically when content is ingested into the system, in an automated fashion 10 times faster than real time, the system, and ultimately the user, can identify keywords, people, brands, and any sort of text. And once you have all of that, this is where the NLP, the natural language processing, comes into play for video specifically.
So that’s how we do it, at a high level. We can also apply custom taxonomies, content categorization, brand safety flags, so if there’s something that is inappropriate or vulgar, that can automatically be detected by the system, sentiment, et cetera.
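To make the pipeline Alex describes a bit more concrete, here is a minimal, illustrative sketch in Python of how signals from speech-to-text, OCR, and image recognition might be pooled and tagged. This is not AnyClip’s implementation; every class, function, and field name is hypothetical, and the keyword and brand-safety steps are deliberately naive stand-ins for the real NLP.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical containers for the signals a video-intelligence pipeline extracts.
@dataclass
class FrameResult:
    timestamp: float        # seconds into the video
    labels: list[str]       # image-recognition labels (people, logos, objects)
    ocr_text: str           # on-screen text found by OCR

@dataclass
class VideoAnalysis:
    transcript: str                                   # speech-to-text output
    frames: list[FrameResult] = field(default_factory=list)
    keywords: list[str] = field(default_factory=list)
    brand_safety_flags: list[str] = field(default_factory=list)

UNSAFE_TERMS = {"vulgar_term"}  # stand-in for a real brand-safety lexicon

def analyze_video(transcript: str, frames: list[FrameResult]) -> VideoAnalysis:
    """Pool spoken, on-screen, and visual signals, then run simple tagging."""
    analysis = VideoAnalysis(transcript=transcript, frames=frames)

    # Pool all text signals: spoken words plus OCR text plus visual labels.
    pooled = transcript.lower().split()
    for frame in frames:
        pooled += frame.ocr_text.lower().split()
        pooled += [label.lower() for label in frame.labels]

    # Naive keyword extraction: most frequent non-trivial tokens.
    stopwords = {"the", "a", "an", "and", "of", "to", "is", "in", "for"}
    counts = Counter(tok for tok in pooled if tok not in stopwords and len(tok) > 3)
    analysis.keywords = [tok for tok, _ in counts.most_common(10)]

    # Toy brand-safety check against a lexicon.
    analysis.brand_safety_flags = sorted(UNSAFE_TERMS & set(pooled))
    return analysis

result = analyze_video(
    transcript="Welcome to the Q3 financial results webinar",
    frames=[FrameResult(1.0, ["person", "slide"], "Q3 Financial Results")],
)
print(result.keywords)
```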
So from there, once you have all of this content in a single repository, centralized in one system and in one platform, that’s where you can apply a lot of the cool features and tools and solutions that clients are most interested in seeing.
So it really is, I think, a natural pairing between what expert.ai does and what AnyClip does. If we focus on the strengths of both companies, and we won’t go through every single one today, just know that expert.ai is great at natural language processing and natural language understanding for text. So whether that is documents or articles, identifying and creating the relationships between those data points or entities and tagging them, that’s what expert.ai is really good at.
And then AnyClip is great at what we mentioned earlier, which is the analysis, data identification, and then experiences when it comes to video. So if you’re a large organization or a large company that has thousands or tens of thousands or even millions of pieces of content, so documents, articles, videos, et cetera, you can now utilize the best of both worlds, the best of both platforms, and have all of that organized, centralized, and then be able to take action very, very quickly on all of that information.
So that’s what I was talking about earlier as far as the chaos and the amount of content out there that has really exploded. You can imagine extrapolating that across a global organization; we really want to make sense of that and then be able to utilize that information really, really quickly.
So once we’ve done that, this is the result: you have unstructured or structured information, data, and content, and you’re bringing it all together to make it manageable, discoverable, relevant, and, I think most important, actionable, whether that is a call to action like making content shoppable, driving people to download white papers or new information, directing them to other platforms, or keeping them on your website. From an external standpoint, it really is important to create that customer journey and provide a way to interact with customers directly through your content.
So this is what we do today with our platform, which is called the Genius Platform. It’s for both external communications and internal communications. For external communications, our solution is called Genius Plus. So this is primarily for marketing, for IT, digital and innovation groups, as well as events.
And then internally the platform is called Genius Work, and the key stakeholders and use cases there also include IT, but think HR trainings, or any sort of important meetings within your organization where the CEO or the CFO or CMO or other executives are speaking about important information, being able to go right down to the exact moment where they were talking about the Q3 financial results or about 2023 planning.
Or, from an HR and onboarding perspective, that could be benefits, whatever it is. We also use this a lot for sales calls; we use this platform every day at AnyClip. You really do want to be able to identify important information really, really quickly. And you can’t do that unless you have everything that we talked about earlier, which is the analysis, the data, and the feature set.
So something that I am very excited about is our new language capability. There are a couple of different options and a couple of what I think are compelling examples of how clients can utilize this.
And so you can choose from multiple options: automated video-on-demand captioning; human-created video-on-demand captioning, so you still have, and will always have, that human element; and then also live closed captioning. So that does improve content search, brand recall, and categorization, but it really is meant to regionalize for different markets, whether that’s around the country or around the world.
And I think the coolest part about it is being able to automatically translate captions and English spoken word into 160 plus different languages. So if someone is speaking in English and your native language is Spanish or Chinese or Japanese or Italian or Hebrew, whatever it might be, the user can really quickly, as you can see in that example, click on the video, select their language, and then the closed captioning is going to be in that native language.
So pretty amazing. It’s almost like being able to switch any English presentation or any English content directly into any language. And I think of it almost like subtitles on a movie, but for business content. So we’re very, very excited about this. It’s right in line with the themes of what we’ve been talking about as far as personalization and how things are so dynamic and constantly changing.
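As a rough illustration of the viewer-side behavior Alex describes, the sketch below models caption cues that carry translations keyed by language code, with the player simply picking the track the user selects. The cue structure and the hard-coded translations are invented for illustration; this is not AnyClip’s player API, and in practice the translations would be generated automatically when the video is processed.

```python
from dataclasses import dataclass

@dataclass
class CaptionCue:
    start: float                    # seconds
    end: float
    text_by_lang: dict[str, str]    # language code -> caption text

# Invented cues; a real system would machine-translate the English track
# into 160+ languages at processing time rather than hard-coding them.
CUES = [
    CaptionCue(0.0, 3.5, {"en": "Welcome to the webinar.",
                          "es": "Bienvenidos al seminario web.",
                          "it": "Benvenuti al webinar."}),
    CaptionCue(3.5, 7.0, {"en": "Today we talk about smart video.",
                          "es": "Hoy hablamos del video inteligente.",
                          "it": "Oggi parliamo di smart video."}),
]

def captions_for(language: str, fallback: str = "en") -> list[tuple[float, str]]:
    """Return (start_time, text) pairs in the viewer's chosen language."""
    return [(cue.start, cue.text_by_lang.get(language, cue.text_by_lang[fallback]))
            for cue in CUES]

print(captions_for("es"))  # the viewer clicked Spanish in the player
```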
And obviously if there is anything that you would like to hear more about this or how it could be applied to your business, don’t hesitate to reach out. But in the meantime, I would love to see if there are any questions on what I went through either from the audience or Brian from you directly.
Brian Munz:
Yeah, no, I mean, I have a few things that came to mind as you were talking. The main one is just to put it into real-world use cases: what’s the most common use case you see, in terms of where it has the highest impact? Because I could think through and come up with some; I could imagine this being pretty big with training and education, as well as, of course, communication, PR, et cetera. But where do you see the most need for either part of this: A, the stuff that you mentioned, as well as the analysis of the text once it’s extracted and used in your tool and analyzed?
Alex Baker:
Yeah, great question. I think of it again from an external perspective and an internal perspective. So if I’m a marketer, whatever my KPIs or goals and objectives are, that could be as simple as revenue and selling our product or service, or something more nuanced, I guess, as far as the performance and the analytics of the content by geo or by device type or whatever it might be.
So I think that one of the strongest use cases is from a marketing perspective, for external communications, really driving conversions and results and making all of your content shoppable, as well as life after live.
So Brian, you and I were talking before this live stream and webinar started about how we’re getting close to the holidays here. People are in different time zones, they’re very busy with their professional lives and with their personal lives. So there’ll probably be a much smaller number of people that watch this live, but then most likely a much, much larger number of people that watch this, or something like this, from a video-on-demand perspective.
So being able to search through that content, go back, and find the specific pieces where I’m talking about NLP or about events or whatever it is that’s important to you, I think that, when we consider everything we have going on, being able to just go and find that information really, really quickly and then either take action on it or go on about our day after we’ve consumed it is super, super impactful.
So from an external communications perspective, I do think that some of the strongest use cases are around marketing and actual conversion and analytics. But from an internal perspective, from what we call Genius Work, it’s all about the efficiencies and making people’s lives easier.
From your standpoint, being in product, I’m sure there are a lot of discussions around product roadmaps and collaboration across teams. So within a platform like this, you can go right to that exact moment when that slide is up and then comment on it, and then you’re instantly collaborating with your colleagues and they’re going right to that moment. They’re not spending hours and hours trying to find exactly what it is they’re looking for.
So I do think, from a meetings perspective, that’s one of the biggest use cases that we see, also internal events. But then, really, it’s just being able to save time throughout your day. We don’t want to spend a ton of time going back through old documents and rewatching things. We want to get to that information really, really quickly.
So what we’ve done is taken the technology that we talked about earlier, the NLP for video and the visual intelligence and AI technology that we have, and created highlights. So for a webinar or a team meeting or whatever that might be, instead of having to watch the 45 minutes or the hour or whatever it was, you can go right down to a few minutes and just watch that, and the technology is able to do that for you.
Brian Munz:
Yeah. I mean, it makes sense even in the context of these live streams we do. I could imagine a use case where, if there are particular topics we talk about that you’re interested in more than others, there’s a way the content can be filtered, because, of course, one of the most common use cases is around metadata. So we’re taking that and extrapolating out the topics, organizing them, and enabling the ability to filter and find, like you said, the content that you’re looking for.
Because that’s definitely something that I’m sure a lot of people, myself included, have struggled with, with videos, where you watch something and you’re like, “I don’t know where it is,” and you’re trying to remember when you heard something that was said at a given point. And it drives me crazy sometimes, but yeah.
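That “where did they say that?” problem is essentially a search over time-coded metadata. Here is a minimal sketch of the idea, assuming the transcript has already been segmented and topic-tagged by the analysis step; the segment data and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float        # seconds into the recording
    text: str           # speech-to-text for this span
    topics: list[str]   # tags produced by the NLP step

SEGMENTS = [
    Segment(120.0, "Let's review the Q3 financial results.", ["finance", "q3"]),
    Segment(610.0, "Now, onboarding and benefits enrollment.", ["hr", "onboarding"]),
    Segment(1425.0, "Here is the 2023 product roadmap slide.", ["product", "planning"]),
]

def find_moments(query: str, segments: list[Segment]) -> list[tuple[str, str]]:
    """Return (mm:ss timestamp, text) for segments matching a keyword or topic."""
    q = query.lower()
    hits = [s for s in segments if q in s.text.lower() or q in s.topics]
    return [(f"{int(s.start // 60)}:{int(s.start % 60):02d}", s.text) for s in hits]

print(find_moments("roadmap", SEGMENTS))  # jump straight to 23:45
```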
Alex Baker:
Yeah, that’s exactly right. I look at that as from a video management perspective, how do I manage that? How do I filter that down? And we’re giving you the tools and the capability to do that.
Brian Munz:
Yeah. Well, and on the somewhat more technical side, I can imagine … So a big concern now is … Well, there’s SEO, right? Some technologies will just pull out the text, and that of course is going to help with SEO, but having the text plus topics plus any other kind of metadata on a page is going to help your SEO results even more.
But I would imagine accessibility is a concern too, of course. You want to make sure that your content works for vision-impaired and hearing-impaired people, that you can meet the standards you need to meet, I would guess, and that’s a bit easier if you can do it through AI, whether with your technology or ours. So it makes sense-
Alex Baker:
Yeah, exactly. And that’s right in line with what we’re speaking about as far as automatic closed captioning and multi-language translation. It certainly helps with that. And with the video player technology from AnyClip, that’s automatically built in. So it’s not something that even has to be customized, it’s already built in.
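One common way the extracted transcript and topics end up helping SEO is by being published as structured data alongside the embedded video. The sketch below generates schema.org VideoObject JSON-LD from that kind of metadata; the values are invented, and this is a generic pattern rather than a description of either company’s product.

```python
import json

def video_jsonld(name: str, description: str, transcript: str,
                 keywords: list[str], upload_date: str) -> str:
    """Build schema.org VideoObject markup from AI-extracted video metadata."""
    data = {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": name,
        "description": description,
        "transcript": transcript,          # full speech-to-text output
        "keywords": ", ".join(keywords),   # topics extracted by the NLP step
        "uploadDate": upload_date,
    }
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

print(video_jsonld(
    name="The Rise of Smart Video",
    description="Unlocking the power of video with AI.",
    transcript="Hey everybody, and welcome to the NLP Stream...",
    keywords=["NLP", "video intelligence", "closed captions"],
    upload_date="2022-12-15",
))
```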
Brian Munz:
Right. We have a question here, I’ll put it on the screen. Can a customer train your algorithm to recognize unique words or product names and visuals? Can your system accommodate it?
Alex Baker:
Yeah, Dan, great question. So the way I think about this is that the more like content the platform and the algorithm see, the better they’re going to be at recognizing those keywords or those scenes in the videos or that text or that audio, whether that is specific to a product or to people. If it’s a famous politician like President Joe Biden, obviously that’s going to be identified, but if it’s someone else, it’s going to get better and better at recognizing them.
And it functions very similarly for other things outside of people, whether that be brands, logos, products, et cetera, visual elements. So yes, it can do that, but just know that over time it’s going to get better the more that it sees.
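To illustrate the custom-recognition idea in the simplest possible terms, here is a toy phrase matcher that tags customer-specific product names in a transcript. It sketches the concept of supplying your own vocabulary; it is not how AnyClip’s models are actually trained, and the term list is invented.

```python
# Hypothetical customer-supplied vocabulary: unique terms to recognize.
CUSTOM_TERMS = {
    "genius platform": "PRODUCT",
    "genius work": "PRODUCT",
    "acme turbowidget": "PRODUCT",  # invented example name
}

def tag_custom_terms(transcript: str) -> list[tuple[str, str]]:
    """Return (term, label) pairs for every custom term found in the transcript."""
    lowered = transcript.lower()
    return [(term, label) for term, label in CUSTOM_TERMS.items() if term in lowered]

text = "Internally the platform is called Genius Work, used for HR trainings."
print(tag_custom_terms(text))  # [('genius work', 'PRODUCT')]
```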
Brian Munz:
Makes sense. We’ve got another question here. Let’s see. How much time does it take to convert a video such as a company presentation to AnyClip smart video?
Alex Baker:
Juan, the simple answer is 10 times faster than real time. So if you have, let’s say it’s an hour-long video and that’s about 60 minutes, it should take approximately six minutes. So 10 times faster than real time.
And this also brings up a good point, especially if you have a lot of content that is on YouTube or other third party platforms, we have bidirectional integrations with those platforms. Or if it might be something where your content is stored internally or on a DAM, a digital asset management system, whatever it might be, we always work with our clients to try to automate that so that you’re not downloading and uploading. We really do try to automate that process as well, wherever we can.
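The “10 times faster than real time” figure translates into a simple estimate, sketched here with the speed factor taken from Alex’s answer as an assumption:

```python
def estimated_processing_minutes(video_minutes: float, speed_factor: float = 10.0) -> float:
    """Estimate ingestion time for a video processed N times faster than real time."""
    return video_minutes / speed_factor

print(estimated_processing_minutes(60))  # an hour-long video -> about 6.0 minutes
```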
Brian Munz:
Makes sense. So I don’t see any more questions coming in, but I for one, definitely found this extremely interesting. It’s always really cool to see as things like this get more automated and things that are able to … It’s a huge influx of video. Anyone who has kids can tell you that especially. And to see an ability to use AI to of course extract and analyze is super interesting. So definitely found it really interesting. So thanks for joining us and hopefully you can come back in the future and talk some more about what’s going on.
Alex Baker:
That’d be great. Yeah, I’d love to do that. And Brian, thank you very much and have a great day and happy holidays.
Brian Munz:
Thanks, you too. So yeah, speaking of holidays, our schedule is going to be a little bit less regular, I would imagine. So just follow us on Instagram … Instagram? LinkedIn. I’m not sure if we’re on Instagram … to see when we’re streaming next. But I hope everyone has a good holiday and I will see you then. Thanks.
Alex Baker:
Great. Thanks.
Brian Munz:
Bye.