cancel
Showing results for 
Search instead for 
Did you mean: 
urvashi
Community Manager Community Manager
Community Manager

Key Takeaways

  • Hugh Neale and Lindsey Thompson talk about integrating Artificial Intelligence (AI) into JumpCloud's offerings, underlining the importance of choosing the right AI tools and considering data privacy.
  • Hugh Neale highlights the evolution of AI and machine learning, stressing the significant advancements in the last few years and their implications for Large Language Models (LLMs).
  • The discussion addresses the limitations and challenges associated with LLMs, such as data size constraints and handling complex queries.
  • They continue with a demo of JumpCloud's AI platform which can convert natural language questions into SQL queries to get information about your JumpCloud environment. 
  • The team outlines future developments and the importance of community feedback in enhancing JumpCloud's AI platform, including integrating more data sources through the EA program and discussing longer term feature requests. 
  • For more insights, we encourage you to check out the full video of this engaging discussion on AI and its integration into JumpCloud's services.

Introduction

In this week’s episode of IT Hour, JumpCloud's Hugh Neale and Lindsey Thompson come together to explore artificial intelligence and its possible use in the context of JumpCloud.

Hugh, from the JumpCloud Labs team, brings a deep understanding of AI, cybersecurity, and compliance, while Lindsey from Product Management brings her insight into AI's practical applications and challenges within JumpCloud's product ecosystem. 

The discussion covers the evolution of AI, the complexities and limitations of Large Language Models and potential applications for use within JumpCloud’s platform. This conversation provides a nuanced view of AI's role in reducing the barrier to entry for querying data and the road ahead for AI’s integration into the JumpCloud platform. 

Full Video + Overview

The IT Hour | Exploring Artificial Intelligence 12.01.23

Speakers

Hugh Neale, Principal Engineer, JumpCloud

Lindsey Thompson, Director of Product Management, JumpCloud

Becky Scott, Head of Technical Community, JumpCloud

Urvashi H.V., Technical Community Champion, JumpCloud

Transcript

Introduction

Becky Scott:

Alright, let's jump into our main topic. We're going to hand it off to Hugh and Lindsey and let you. Well, first off, Hugh, since this is your first time on the IT hour, why don't you go ahead and tell us a little bit about what you do here at JumpCloud?

Hugh Neale:

Thank you. Yes, so I joined JumpCloud back in late February of this year, 2023, following an acquisition of my company Zercurity, which was a cybersecurity company mainly focused on Compliance using Osquery and a number of other open-source platforms. And since then come into JumpCloud as part of the Labs team. And our mandate is basically to see interesting and new innovative technologies, products, ideas that are floating around on the internet. Can they be relevant to JumpCloud? Hence, Artificial Intelligence is a big one at the moment. But yeah. Really trying to come up with those bigger moonshot ideas, which might not always land all the time, but it's always good to try and figure out whether these things make sense or not. 

Lindsey Thompson:

So frequently asked questions, AI is making the rounds, and the news people are trying it out, you're seeing it pop up in different products, and we want to not just talk about AI but talk about AI within the context of JumpCloud. What are we doing with this new technology? Why do we feel like it's worthwhile to invest in AI? What are the right tools? Because there are quite a few approaches you can take within the realm of AI. And so we're going to talk a little bit about that specifically, Hugh, is our master of that technology. Do we have anything to show yet? And things are like, “Hey, AI is using data, it's learning. How does that relate to our data and the ownership or privacy over data relating to your particular organizations?” So those are some of the topics that we will cover in today's conversation. And again, why AI? I'm going to let Hugh give you a little bit more context around AI.

Hugh Neale:

Thanks, Lindsey. So AI or Machine Learning really isn't that new. It's been around for a very, very long time now. Big things that have changed is computers have got a lot faster and there's a phenomenal amount of data that we can now train with using those much bigger GPUs and memory. And this hit a bit of an inflection point in 2017 when the idea of Transformers came into the mainstream. And this is basically what open AI figured out and saw as a transformative leap in how we do Large Language Models or AI as we know it, moving forward to Artificial General Intelligence (AGI) in the future. So that's been really interesting, those big developments, but how can we use it?

AI in the context of JumpCloud

Lindsey Thompson:

Yes. So the product person here is like, “Oh great, it's exciting tech”. And I think, like I said, a lot of organizations are starting to, a lot of products are starting to introduce AI to their products. I don't know about you, but there are some areas where I've seen it be effective. And then there are a lot of areas that are a little bit, “I dunno, I don't trust it yet, I don't trust it, or I'm not sure how to engage with it”. And those are some of the considerations that JumpCloud considered when we were looking at how we might want to use AI. But if you think of what AI is capable of doing and some of the Natural Language Processing, a lot of it comes down to being able to engage data that lives out in the world in a more natural way.

And we will talk more about that. But when we thought about our product and what data people want and need access to, it became very clear pretty quickly what areas of opportunity we wanted to start to dive into. So, just in the realm of insights, and I'm speaking specifically to System Insights (SI) and Directory Insights (DI), collectively, within this year, 150-plus organizations have submitted feature requests. So when I tell you we listen to feature requests, we absolutely do. And some of what we look at is the volume of requests that we're getting in certain areas to understand the kind of pain or friction or opportunity that we have. And so, here's just a sampling of some of the requests that do relate to “How can I query the data better?”. So for instance “It'd be nice to have a possibility to collect System Insights data per device from the JumpCloud Admin Console directly. But at the moment, the automatic frequency is not that often and so it would be nice if we could trigger the update from the UI, per device”. 

“I want to use System Insights to see if a device has Firefox installed. I can use SI to check a specific device, but I cannot use System Insights to determine how many devices in my fleet have Firefox installed.” So it really comes down to you've got this pool of data here, and you're trying to get at it, but the methods that we are providing you today to get to that data require a lot of overhead from you. You're going and doing a lot of redundant tasks just to extract that data not coming out in the format you want it, et cetera. So these are some of the considerations from a customer request perspective that we do think we can address. Now there are others, I'm not going to lie, not all 150 requests relating to SI and DI are going to be solved by AI. There are depth requests, “I want more information about certain activities, certain events.” So I just want to be a little bit transparent about that. We're not going to solve all the problems, but this is an area of opportunity identified.

Limitations of Large Language Models (LLMs)

And realistically, though, there are some limitations that we have to respect, and some of the challenges with LLMs relate to how can these language models dig into complex subjects where there may be a lot of superficial data available, but when we need to go deeper and answer some harder questions, there are limitations in its ability to do that math. Again, it's looking at language that's available. So it's not actually doing a compute to answer a question that you leverage or put out there in the world. Live/Recent Knowledge. So what is this data set and what data set is actually most relevant might be mixed in with some of that data that is not so relevant. Hugh will go into a little bit more detail around the challenges with prompt engineering and the bias, right? Because the data sets that are being used are data sets that are predefined. So I'm going to hand things back over to Hugh to go a little deeper.

Hugh Neale:

Thanks, Lindsey. So, ultimately, Large Language Models have been trained on a large volume of language, typically things found about the internet, and all they're really doing is effectively predicting the next best word in the prompt that you have given it initially as part of your question in that classic chat view that you have. And when you submit that question, you can very clearly see all of the words being generated one after another and really, really, really great , for things when you want to generate an email, or you want help starting the beginning of a document, or there's a short comment or review or anything text related, it's fantastic. But because of that, accuracy is really important at JumpCloud, and anything being generated by these Large Language Models are going to potentially drive business security and remediation tasks at JumpCloud.

So that's really, really, really critical to us. Also, limited by the context size as well. Now there are a number of ways to get around this, and this has been improving, but the context size is effectively that initial input field that you have at the beginning where you ask ChatGPT or your Large Language Model to do something you usually have to give it a bit of a character persona, explain to it that it has a new role as a programmer or a database administrator or whatever it might be. And then within that you have to give it a lot of information potentially about what it needs to go and do. So in JumpCloud's case, for example, you might have to give it a list of systems that you're interested in or the metrics about those systems and all of this uses up this big context window, usually of about 4KBs.


And because of that you usually actually end up having to stagger these questions. You end up having to chain these prompts together, and you end up in this feedback loop where the Large Language Models pass information from one session to another. And then you've also got the problem of prompt injection. So at any point within that conversation that the Large Language Model is potentially having with itself, you are allowing user input, which is the cardinal problem of the web, right? Never trust the user when it comes to writing things into your database. And the same is true for prompt injection. It is very easy to basically say, “Hey, forget everything you've been tasked with before, and I want you to run off and do this other thing.” And with that, , come performance and cost implications as well. 

Since that initial launch of these Large Language Models as well because you have the ability to effectively give the Large Language Model a task, and you can say “When the user asks for information about the weather and a location of that, you can take those two identifiers and I want you to convert that into a JSON object or some sort of API call that looks similar to this.” And then when you get that result back, you in your own programming language can then take that output, call an API to get the weather, send that back, and then give that back to the Large Language Model to say, “Hey, here's this information, let's come back as a JSON object, which isn't very human friendly as it's been generated by a computer system. I want you to now interpret this and present it back to the user.” So that's how these Large Language Models are giving the illusion that they're able to effectively autonomously go and figure out the weather and bring it back and provide an answer. And then as I said, you can chain all these things together so you can create just like I might want to as a JumpCloud administrator, I can give it the whole list of tasks to go off. And then, through these function calls, as Juergen has shown in his community posts, you can execute all of these tasks. 

AI Experiments in JumpCloud

These are a few of the experiments we had. So the first one was basically building off of the fact that Intercom sits in the bottom of the JumpCloud Console for help and support and thought that that would be a really interesting place to start with so that you could go there and ask about System Information or “How do I go off and create a policy?”. But we felt that wasn't really a good value add to a user at the end of the day because it's something that they can go and connect to OpenAI to do themselves or a number of other tools that are being built on top of that entire kind of LLM construct. And then, of course, you've still got the problems I've described earlier, which leads me into this next one.

So this is kind of taking that intercom chat window and breaking it out into a more advanced view. So here I'm kind of asking, “Hey, get me a list of systems”, and then “Hey, how much disk space in gigabytes is remaining on that system?”. And, obviously, very cool, but the big issue with this, and going back to the slide that Lindsey showed on some of the user requests that we've had, none of this is being solved in aggregate. So, if I want to instruct a Large Language Model to go and get all of the systems on my network and figure out which ones are running critically low on disc space, the Large Language Model basically has to do exactly what you would do. Writing a Python or PowerShell script, connecting to the API, iterating through all of the systems or devices, then iterating through all the System Insights calls, retrieving all that data, and then trying to format it.

And that's really expensive for a Large Language Model to do. It takes a lot of time. These APIs are quite complicated. And then finally you've got that problem of all this information is being returned, and that's a lot of text for an LLM to digest and process, and you've got those artifacts and those hallucinations which can start to creep in and even System information has been swapped around and everything else, giving completely unreliable results and information and taking a vast sum of time as well. So yeah, just to summarize, very, very good if you are doing a specific task with these LLMs for function calls. So add a task, remove a task, get the weather check flight information. But when it comes to aggregate or more complex API tasks such as JumpCloud and all the data that we have spanning Devices, Identity, and Security, these requests just take a huge amount of time to be planned, and the results look impressive in small tests but ultimately don't really solve a lot of the problems that our admins have. So I'll hand it back to you, Lindsey.

Lindsey Thompson:

We tally these challenges back to our solution and really ground ourselves and what are the needs, what are we looking to provide you? You need accurate information first and foremost. So how can we craft this so that we are working around the limitations, the hallucinations, we're feeding it the right data, timely responses- So you're not going to want to ask a question and wait for hours. You need to be able to get a quick response. You are going to need to access specifically the data that we have within Directory Insights and System Insights. Reporting is something we want to get to. Can we solve through that or solve for that with this? Maybe not maybe, but we know that that is a need. And then simplified and natural access to data is really when it comes down to it. We're asking you to go through hoops today to either construct the right query in a DI UI or go and dig into different parts of our System UI to get to particular pieces of information. And really what you're needing is a more simplified way of asking a question and accessing the data. 

Hugh Neale:

To add to that, a lot of the ways that admins will try and build reports or get data out of the system at the moment is through PowerShell and REST APIs and things like that. And, , those things take a lot of time to run as well, and you've got the overhead of building those things or using JumpCloud solutions team’s to help build those things too, which, if you're after information quickly, it's not ideal.

Lindsey Thompson:

Good call. So, in the solution, we want to use this natural language interface to enable admins to ask questions to JumpCloud directory using plain English to retrieve forms of information out of JumpCloud's various asset and transactional data stores. So we know what those are, our System insights and our Directory Insights primarily. And specifically with no knowledge of SQL or RESTful API. So we're not going to ask you to even go so far as to engage a PowerShell, for instance. The idea is that this is very simple and accessible to all of our IT admins as well as doesn't require the overhead or the time even for those admins that are quite capable.

Leveraging Large Language Models (LLMs)

Hugh Neale:

So going back to what Large Language Models are really good at, Large Language Models is really good at text and generating text, and there are a number of models available that are very good at generating code and you've seen it with GitHub's copilot, a number of the other Microsoft projects in that pool. You can have a little text prompt, and it will go ahead and generate a whole bunch of code for you. Might not necessarily be a hundred percent, but in most cases, it does a really good job of actually generating code, and SQL is really no different to that. And bigger applications out on the internet, so big SaaS companies, which have the same problem whereby they have all of this fantastic data. If you look at one of the big ticketing and code management platforms, they provide SQL-style interfaces to be able to allow that customizability as well as one of the biggest CRM tools on the platform as well. They have these APIs to let you construct direct SQL queries to their databases to get back the information that you need, which is a bit of an old-school way of doing it. And then GraphSQL has a lot of flexibility to give you direct access to the data that resides in those data stores as well. But again, you've got to have real intimate knowledge of these table structures, these APIs and what information's there, how you pull it out, build it together. And, of course, you have to program all this information that you have as well.

The JumpCloud AI Platform

So the idea that we've landed on is it's great to be able to have direct access to the database with SQL, but the problem is having knowledge of the database structures and generating that SQL. Large Language Models, very, very good at generating SQL. So I can take a question very good at generating SQL, and then we can then execute that against the database to retrieve fast, accurate, uninterpreted results from the database that haven't been pre-processed and made ambiguous by a Large Language Model. 

So we have built our JumpCloud AI platform, we're calling it the natural language interface whereby you can ask a question, it'll generate the query and it will go and execute it and bring the results back for you. This is currently in Early Access. So at the end of this demonstration, we have a channel in Slack, and we are also contactable via email and everything else, and we're very, very keen to see what the community thinks about it and guess also feedback and see if this is something that people find useful.

So we have basically built our own Large Language Model, which we have trained for months now on the topology of JumpCloud's database systems, what that data looks like, how it is related to one another. I'll add, just to be clear, this has not been trained on the actual data in the database. This is purely just the actual topology and schema of it. So that when you ask your question, it can correctly identify how to go and query and retrieve that information. So a really interesting one would be the example I gave before where you want to figure out which systems might be running low on disc space. The Large Language Model has to come up with the correct query to go and do that. And without being pre-trained, it might go and find what it thinks is probably the right tables to go and query. But there are some subtle differences and language that's used when it comes to IT administration that a generic Large Language Model wouldn't be able to guess.

Demo of JumpCloud’s Natural Language Interface

So with that, I'll kick over to a demo. So we've set up JumpCloud AI before you all run and try to connect to it. It's internal only or unless you've been invited to the Early Access program. But, essentially, the way it works is you can click login, and it'll take you through the Admin Console login process, and then, well, actually, I can share this, and then we can connect to that. We provision you an account, and once it's gone through the process, you'll then kick back to the previous page, and you can then actually query all of the data that you have in your JumpCloud tenant. 

So what can we do? So we can ask really simple questions. So, “How many systems do I have?” And we can click ‘Query’ and you can see immediately, a very simple query that's run. Instantaneous results and accurate results too. And counts are great, but “How many systems by OS?” And I can get back a list of systems by Operating System as well as a count, but maybe I want version information as well. I can get the versions as well. So I've not had to dig into anything and run around, I can just quickly ask a question and get a result. “Which system has the least amount of disk space?”

So much more complicated query sifting through all of that and giving you a simple result back or “Which systems have FDE?”. So FDE - full disc encryption, it might not necessarily know that that's what it relates to in some of the other models, but because trained this one on what that terminology means, it is able to go and get that information out of the database and bring it back or “Which users have 2FA enabled?” Okay, well no one does. So that's something I need to go in and sort out. 

And this works with “Get me a list of programs”, “Which systems have Chrome installed?” or “Which systems across Windows have Chrome installed?” or “Which ones don't have the latest version?”. It’s open ended. And that's we've got to a point now where we've had this model, we've trained on all these questions, and we're in a good place with it, and that's where we want to start to open it up to customers to start to see what questions you guys are asking and improve the model further before we can get to a point hopefully where this ends up in the product as part of a global search and you can get the results that you need, export them, perform remediations, do reporting, and beyond that there's lots of really interesting future development that comes out of this as well. But I shall pause for now and see if there's any questions.

Early Access and Feedback

Becky Scott:

We see a lot of, “I want to test”, “How do I test?”. Alright, so let me add the Slack lounge link. So at the bottom there, you should be able to click that to get over to the Slack lounge. The channel is #jumpcloudai and Keith, to answer your question, that is internal only and you have to be given permission to get into that. So that's why once we get your volunteering we can get you into that.

Hugh Neale:

Yeah, this is completely separate from JumpCloud. It's hosted on a separate domain and the whole part of that login process is to basically make sure that you're giving consent and your data's being accessed by this new system.

Lindsey Thompson:

Yes. So exactly what we're looking for, all of these people are excited about it, we want to get your hands on this and we'll be working to make that possible. But just knowing how many people actually want to try it out is valuable in and of itself. For those that participate, we will request that you engage with us to provide direct feedback around what you experience, what you would like from it that you're not able to see in the POC, so on and so forth. And it's not something that we're ready to, I think turn on for an organization tomorrow, but early 2024 is around where we're hoping to get this into hands. 

Hugh Neale:

Yeah, just really get an idea of what the appetite is for this sort of thing at the moment and sit down with a few of you who are really interested, just get on a call to really understand what you think, how this might help. Certainly run through some of the questions that you have through the demo and then when we're in that state we can really move it out to EA and invite everybody to use it. There's a few more systems that we want to connect to it as well, because some of the internal testing that's been done, people will ask questions that just the data's not there in the backend of the system to answer that question. And that's very disappointing when you've got expectations that it can answer all the questions and it doesn't. So, just connecting a few more data sources in, but when you guys get your hands on it, you'll be able to just ask as many questions to your heart's content.

The Potential of AI in IT

Hugh Neale:

Jeff made a good comment. Provided this stuff goes well and you have the ability to use natural language to construct queries to get the data that you need, it becomes a natural way of then taking that data and feeding it into other Large Language Models or remediation steps or whatever it might be or even things like dynamic groups and it becomes less about having to figure out how to filter and do all of these things and comes down to just having to describe these things in words and have it provide accurate information.

Lindsey Thompson:

So there is a question here and I want to invite more, but “This might have been mentioned, but will JumpCloud be deploying the ability to grab System Insights in the moment to grab up-to-date app information?” So I'm assuming software applications is what we're asking about here.

Hugh Neale:

So you can ask us at the moment. So I go back to the demo, we can just say “Get me a list of installed programs” and you've got a list of all the programs there in their version and you can break down onto Windows, you can say, “Hey, give me an aggregate of what are the most common applications, which ones are out of date.” All the data's there, the natural language to SQL, and anything that you can do in SQL, you can just construct the question to create that SQL and it'll go off and query it. It's extremely flexible.

Becky Scott:

Well this is great. Any other questions? So we have it on the recording - go to the Slack lounge and go into the JumpCloud AI channel newly created, and you can say, “please enroll me”, and we'll reach out to get the information we need get your official declaration of yes, we're going to do this and go from there.Awesome. I think those were the only two.

Will AI Take Over IT Jobs?

Urvashi H.V.:

I have a question. Am I going to be out of a job? Is this going to take over my job? Hugh and Lindsey is the Technical Community Champion just going to be an AI bot now?

Lindsey Thompson:

I don't think we're there. I don't think we're there. If you take a step back and look at AI really broadly I think and machine learning as well, there are a lot of perhaps exciting opportunities to make decisions on behalf of IT admins, but I don't think we necessarily can claim smart decisions in this moment with all of the technical limitations that we've called out.

Data Protection

Becky Scott:

Nathan's saying “I think with all things AI, one of the biggest concerns is data protection. Is this circumvented because the model itself is not storing the actual data but instead only generating queries to access it?”

Hugh Neale:

So, very good question, and you're a hundred percent right. So not only is this JumpCloud's own model, so no data is heading off to OpenAI or any other Large Language Model vendor, but yes, because we're just generating the SQL itself, that's the only thing that's generated and that gets sent to be executed in the exact same place where all the data already resides at the moment. So, no data is being copied and sent out to anywhere else to be interpreted or anything else you are just getting your results through, well in the future, through the JumpCloud console.

AI Hallucinations

Becky Scott:

Hopefully that answered what you wanted to know, Nathan. Yeah, this is exciting stuff. I think it's going to be really interesting to see where it goes and where it develops from here. I think some of the things you mentioned around accuracy and keeping it from hallucinating and swapping information between systems and things like that will be maybe one of the challenges, like it not making stuff up over time. If you've got a really long list of things that you want to ask and I guess over time trusting the information it gives you that it's not swapping some things or things like that. I guess that'll be where it'll be difficult for some people. I mean have you talked through any of that yet?

Hugh Neale:

So no, because it's executing against the database directly, it won't interpret or do anything with data. You are just getting the raw output back from the query that you run and because it's just being run against the database, it's fast and accurate and live as well. So it's the latest representation of that data at that moment. The big use case for me having worked in prior roles in security operations as well, there's always someone running up to your desk and going, “Is this patched?”, “What machines don't have this?”, and you've then got to spend 10 minutes to an hour compiling some sort of report to figure that out. And having the ability to just ask a question, have that be transformed, executed against the database, and have that real-time information back, I think, is going to be massive.

Using JumpCloud AI for Reporting and Compliance

Becky Scott:

That's good, that's good that it's just referring directly and not doing anything more. Nathan wants to know, “Is there a way to then export the results to say CSV, XLS ,or PDF?”

Hugh Neale:

Yeah, we'll have that. 

Becky Scott:

That's good. That's really good.

Urvashi H.V.:

I have a follow-up to the download to CSV. Do we have any long term plans to make charts and reports and things like that?

Hugh Neale:

Yeah, so this experiment proves successful then yes, we're going to do all of that. The things that I'm really interested and excited about are things like trending topics or things like that. So when you come in the morning potentially there's already questions already there for you to answer. If there's been another vulnerability discovered in your favorite app or whatever it is, that query is there ready to run immediately and you've not got to go and think about it or build it or do it. It's just there mixing that in with all the reporting and everything else, I think it's going to be very powerful.

Urvashi H.V.:

I think from a compliance perspective also, there's been a few conversations in Slack Lounge and in the community about how do I know which devices are compliant with which protocols and government guidelines and things like that. So I think that'll also be really cool.

Hugh Neale:

So yeah, you can ask which devices are failing policies and you can run those queries on aggregate as well to get counts of the most common failing policies and why they're failing.

Becky Scott:

Yeah, Nathan's saying “Reporting has been a pain point for a long time and as Lindsey alluded to, being able to handle that need via AI would be really helpful.” So, hopefully, that works out.

Lindsey Thompson:

Hugh comes from a compliance background and so when we think about reporting, there's reporting against say, “Hey, are all of the policies that I've applied actually applied? Are they functioning on my devices?” And then there's a different question of “Is it meeting a certain standard?” And so I want to just kind of call out: are we satisfying whatever policies you have constructed versus a government benchmark for instance is two different ball games and when we get into the big C compliance there are more considerations, there's more data that we would have to be tapping on. So I think there will still be a need for you to translate what data you need today to answer those compliance questions. So it's actually a hot topic. I just came out of a week long 2024 product leader summit and compliance was a hot topic. I think the first step is making sure that it's easy for you to get the data that you need to go and answer those questions.

Becky Scott:

And there's a conversation going on in the chat where Urvashi mentioned MSPs needing reports in the MTP and Jeff says, “Yes please, because he's saying I find a problem, I need to fix it. Then set that to automatically remediate going forward. So maybe offering up recommended policies to apply based on non-compliance and things like that.” Really some automation for that compliance piece is what it’s sounding like. Running some of that to find it and then say, “Okay, now that we found it, let's just, going forward, automate it so we don't have to manually do this again.”

Longer-Term Goals

Hugh Neale:

So this probably won't be in the initial release, it'll just be the ability to export to CSV and PDF etc.. But we know what types of information are coming back based on the table schema. So if it's around devices, systems, policies, the identifier’s there, we can easily link in and create tasks and hot tooltip buttons to go off and do things and remediate and take action. So that's going to be the first easy, easy step. And then starting to think maybe of AI longer term with trying to chain these things together once it's come along a bit more, might be the latter.

Lindsey Thompson:

And recommended policies based on compliance isn't necessarily something we have to have AI answer for us. I just want to put that out there as a conversation point that was touched on in our product conversations. So we know it's a burning need for some of you, and we want to make it easier but what is the best way we can make it easy today, tomorrow, and then eventually get to a more complete compliance solution.

Becky Scott:

And what Nathan's saying is they would want reporting specific to them, not necessarily based on compliance. So I think the point is that you run a report for things that you need and then whatever it is to bring it into compliance for your needs and then even if it's a day-to-day type thing and then set it to where it's automatically remediated from that point forward. So I don't think it necessarily has to be big C compliance. It could be compliance with your policies so that you can easily remediate it and automate it going forward so that you don't have to run those reports once a month and fix it. If you could just set it, say, okay, now we found this problem, let's just make sure that going forward, it runs and fixes it on its own. So I think something like that is what I'm hearing might be helpful. 

I like Urvashi’s suggestion of that. Yeah, we'll do a Reporting Wrapped like Spotify. You reset 239,852 passwords this year, 155 devices refused to upgrade their OS until you showed up at their door. There you go. Alright, and with that, we are at our time, so thank you all for the great questions. 

Remember, go to the lounge if you didn't already. I already see a bunch of people have jumped in. I think we're up to nine people already, and most of them have already raised their hands, so that's great. But you can do it anytime. Jump into #jumpcloudai and raise your hand if you want to be part of the EA. Thank you, Lindsey and Hugh, for the great discussion. Thanks for the good conversation in the chat. We will see you next week. So thank you, everyone. Hope you have a great first weekend of December and we'll see you again next Friday. Take.

Version history
Last update:
‎01-03-2024 12:16 PM
Updated by:
Contributors