UiPath: Leveraging Test Suite in Your Automation Program and Beyond

by | Feb 2, 2021 | Artificial Intelligence (AI), Automation, Future of Work, Intelligent Process Automation, Robotic Process Automation (RPA), UiPath

This post includes the full transcript of "Automating Testing in UiPath Test Suite" from the Ashling Partners webinar on January 7, 2021, along with the YouTube video of the entire educational webinar: UiPath – Leveraging Test Suite in Your Automation Program and Beyond. Watch the full YouTube video here.

Transcript: Introduction

Kevin Huggard:

Thank you for joining us. We've got a joint webinar today between us and UiPath to talk about using the UiPath Test Suite and introduce its overall functionality. First, one quick introduction for Ashling Partners.

So we are a consulting firm, and we are partners with UiPath. Primarily, we educate, we build automations, and, probably more importantly, we work in this model of improving, sustaining, and iterating. That is one of the main reasons why we hold webinars like this one today: to drive overall awareness for emergent solutions and concepts that are coming out and being introduced here with UiPath.

My name is Kevin Huggard. I'm the Vice President of Intelligent Automation here at Ashling Partners and also the principal solution architect, working primarily with our larger partners and clients to implement process automations and machine learning models. We have two other folks joining us. One is Evan Bruns, who is the senior pre-sales engineer and also a Test Suite SME; he'll be covering more of the demonstration of the UiPath Test Suite. We also have Matt Holitza, who is the product marketing lead and the Test Suite lead for UiPath.

Webinar Agenda

Kevin Huggard:

In terms of an agenda today: I'll go through some perspective on how we look at enterprise-level automation programs, then do a double click into how testing plays a part in that overall automation sequence. Then we'll dive a little deeper into Test Suite itself and do a demonstration. Finally, we'll try to save some time for questions and answers.

Automation Process

Kevin Huggard: 

Starting with the automation process itself, we have a sequence of steps or stages that we go through. When we look at automations, we move through a planning phase into design. Once we've designed an automation, we get into the build. From the build, we go to train and test, then it moves into production, and eventually into sustain. The interesting piece is that, even though it looks like it's all on the same level, the overall effort of the automation process really varies for each step. For those who are familiar with lean design concepts, level loading, stabilizing the effort between each of these stages, actually helps bring a little more process harmony, which is where this testing sequence can come in.

If we start to look at the overall effort for each of these stages, we hit different peaks, valleys, and rolling hills throughout the process from start to finish. And not only is that effort concentrated within one individual, but we're also talking about groups of individuals. The concentration we're talking about today really has to do with moving from development and testing into production and, eventually, support. There are quite a few roles and people involved in stages three, four, and five here, which drives even more importance for coordination between those steps and stages. If we double click a little deeper into how testing plays a part in this overall effort, we see there are really three primary points in time where testing plays a part.

The first is during build, where the developer specifically is doing a lot of short-cycle testing. It's really asking questions like: Are my connections working? Are the values being returned back? If I'm pinging the database or hitting an API, is information feeding into the process correctly? And if I have any calculations, are those calculations being constructed correctly? We call this short-cycle testing; within the build phase, it typically happens just with the developer themselves.
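As an aside, the kind of short-cycle checks described here can be sketched in plain Python. The function names, data, and values below are invented stand-ins for a real automation's connections and calculations, not UiPath code:

```python
# Illustrative short-cycle checks a developer might run during build.
# fetch_invoice and calc_total are hypothetical stand-ins for the real
# connections and calculations inside an automation.

def fetch_invoice(invoice_id):
    # Stand-in for a database query or API call feeding the process.
    return {"id": invoice_id, "lines": [100.0, 250.0, 49.99]}

def calc_total(invoice):
    # Stand-in for a calculation constructed inside the workflow.
    return round(sum(invoice["lines"]), 2)

def short_cycle_test():
    invoice = fetch_invoice("INV-001")
    # Is the connection working and returning a value?
    assert invoice is not None and invoice["id"] == "INV-001"
    # Is the calculation being constructed correctly?
    assert calc_total(invoice) == 399.99
    return "ok"

print(short_cycle_test())
```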

The second point where testing becomes a factor, obviously, is during the testing phase. This involves larger levels of testing, where we're doing both unit and user testing, and we're trying to answer several different questions at this stage. Is the automation working? If we're having exceptions, are they being handled correctly? And are we really hitting the desired outcome that was inherent in the design of the process itself? This gets really interesting, not only from the side of effort, but also because there are a lot of people involved at this stage. Typically, we have individuals from the business who are supporting testing data and working to accept the end-user testing, but we also have people within disciplines on the IT side who are making sure we're following certain standards and protocols.

The third stage is where we hit a troubleshooting stage during sustain. So we have a process that we've created, that we've automated, and that we've put out into production. As it's running, we may be having some issues; maybe the environment or the application changed. This is where the testing side starts to apply again. The reason we're laying this out like a map is to provide some orientation around where we're seeing these testing challenges within the overall automation process, and how this plugs into the enterprise. Before we move on, one other item that's certainly important, and sometimes becomes an afterthought, is testing requirements and the overall evidence. IT standards and software development standards internal to organizations already exist today. As we bring automation and a COE into our clients, the more we can adopt and follow the testing norms within those organizations, the easier the adoption of RPA and process automation becomes.

UiPath Test Suite

Matt Holitza:

Before I get started, I want to introduce myself really quick. I'm the product marketing lead for Test Suite. I know that marketing thing kind of scares people, but for the first 10 years, before I got into marketing, I actually ran a test automation team and built a framework. So I understand the way it works, which is why this made a nice marriage between this role and what we're going to talk about today. Before I get started with the presentation, I want to show you what is kind of a tangential video. As you're watching it, think about how it could be applied to an enterprise automation situation.

Video: "I know when we see Falcon 9, we get to talk a lot about reusability. Everyone is very familiar with the first stage coming back down and how cool that looks. Dragon is also designed with some reusability; in fact, you've flown Dragons multiple times to the space station. How does reusability factor into Dragon's lifecycle? Yeah, so I'll talk a little bit about both, because one thing everyone always thinks about reusability is, hey, it saves money, and that's great, but reusability actually improves your reliability. So when we get Falcons back, and when we get Dragons back, either after one mission or multiple missions, we can do all these detailed inspections on them. And that's super important, because when you fly a vehicle, you can only have so many sensors on it; you can't put a sensor on every single inch of a rocket or a spacecraft. Especially for rockets that wind up in the ocean, some people don't have any idea what they actually went through. The fact that we get all the hardware back means we're able to inspect literally every square inch of it and make small design changes that actually improve reliability for the whole fleet. So even though Bob and Doug are on a brand-new rocket and a brand-new spacecraft, those spacecraft are actually more reliable based on the knowledge we've learned from reusability. We've walked people through a lot of the new systems on Dragon."

Matt Holitza:

So I was watching this with my son last summer, when the first manned mission went up with SpaceX, and I thought, wow, this is exactly what we're trying to do at UiPath with the fully automated enterprise. That's one reason why we built Test Suite: to expand beyond RPA and extend into the testing world, and even into the IT world, which, as Kevin mentioned earlier, is an important place for automation as well. At UiPath, one of the first things we did when we started to build Test Suite was ask our customers: what makes your automations, your robots, fragile? I like to point this out with a real example. Think about an onboarding automation, which everybody is familiar with; everybody gets onboarded, but there are lots of moving pieces. When we asked our customers what challenges they had, it fell into three buckets, and I think Kevin talked a little bit about this earlier. The first one, of course, is building your workflow and your robot in a good way, so that when things change, when objects move, when updates are made, it can address those things. So that's things in the workflow itself. But when we dug deeper, we also found that there are problems when applications change underneath them. If Workday has an update, or if SAP has a change, will the robots keep working? That was another place where automations would break. The last one is environment changes. Environments change on a pretty ongoing basis; patches are applied, and even just a small patch can take down a robot. And when you're talking about environments, you're talking about internet connectivity to your SaaS providers and you're talking about internal servers.
And our automation goes across all those things.

So when we start to scale this up, and we have many robots where that was one robot, we begin to see that if we're not building our robots in an appropriate way, and we're not testing them appropriately, our teams are going to start to have to deal with a lot of maintenance. I call this a whack-a-mole problem: if you're putting out robots and having to fix them on a regular basis, you're playing whack-a-mole with the robots. Your team is then sucked into that, increasingly doing more maintenance when we really want them to be creating more robots. Applications change quite frequently, like we talked about; environments change a little less frequently. But what we found is that even though business processes tend to stay pretty stable, these first two areas are impacting them. Then what our teams end up doing, as mentioned before, is instead of continuously building new automations, they're sucked into continuous maintenance.

Really, what we foresee is shifting this left, so that we're addressing these problems earlier in the process, which all good testers understand: when you're finding and addressing issues earlier, you're going to have more stable and higher-quality outputs at the end. So we built Test Suite to address both the RPA robot testing and the application testing, which you'll see, and those two combine to create a low-maintenance environment in production for your robots. On the next slide, the other thing we're looking at when we think about a fully automated enterprise, going beyond just RPA, is that we need to start looking at more of these collaborative processes Kevin talked about earlier: defining requirements, defining test requirements, adding a little more rigor into the process, but also taking a more collaborative approach. Today, the problem is that development and testing have their own automations, IT has its own automations, and the business is now having its own automations with RPA.
What we see is all these silos: silos of skills, silos of tools, silos of artifacts. What we want is one automation repository that everybody can use. If we go back to the SpaceX example, imagine comparing a rocket component to logging into SAP: if the same automation component can be used by all of those teams, how amazing would that be? The next slide shows that. If we're all using the same platform, and if you imagine the onboarding example as well, part of onboarding is going to be Active Directory; you're going to have to set somebody up there, and IT probably already has something for that. So what if everybody could use the same component for Active Directory, for example? And then also, taking all the different automation skill sets from across the enterprise and sharing those best practices across the teams, because I think that's another thing that's sometimes overlooked. If everybody's using the same technology and the same skill sets and bringing their expertise, that helps: test automation for applications has been around for 20-plus years, so those folks can give us a lot of wisdom and feedback, really improve the processes, and share their experiences as well. So what does success look like for one of our customers? This is an example of one of our healthcare provider customers, who is using UiPath to test some of their Citrix processes. They found that they were able to increase their test coverage by two to three X, because with UiPath we're applying testing to RPA, as well as applying a production RPA set of tools backwards into testing. Some of our advanced capabilities, like object recognition, really help improve that test coverage.
This customer was also able to improve their release velocity by more than two months. What this really narrowed down to is that they had one robot that did the work of three testers, which freed up their other testers to do more exploratory testing, automate more test cases, package tests for customers, or do more value-added strategy and improvement work. So let's go to the next slide.

With UiPath, our goal is now to enable the fully automated enterprise, combining business, development, and IT operations in one platform. Really, the outcome UiPath Test Suite provides is the ability to scale automation. As Kevin mentioned earlier, if you want to start scaling, you have to look at a more rigorous approach, not necessarily rigorous as in more process, but more of a continuous process. We see that using sprints and some of the agile methods that have become popular in software development, and applying those to RPA, could really move the needle and help us build more resilient robots so that we're able to keep scaling. This is a view of the entire UiPath platform, which you may have seen before. We pull out Test Suite because it crosses many phases; it doesn't fall into just one of these categories. As Kevin mentioned earlier, it goes across the entire lifecycle. We're starting to think of this as maintaining and watching over your automation health: making sure that your automations are healthy, that they're performing well, and that we're able to add that governance component where it doesn't exist today. And this is how Test Suite is architected. Starting from the right side, we have Studio Pro, which is really where you build the automations. Orchestrator is where you're orchestrating and executing your tests: you create your tests once in Studio Pro with your workflows, then you deploy those to Orchestrator, where you can continuously run those tests, even after you deploy to production, to make sure things are still working. And then those tests are run by test robots.
Test Manager is really the hub where you manage things like requirements. You can pull those in from other external tools like Jira; we have integrations with SAP Solution Manager and Azure DevOps, and ServiceNow is coming soon as well. Test Manager is also where you can execute your tests and do those exploratory tests, and we support manual testing as well. On the next slide, to understand how we service two different types of teams, we have application testing teams and we have the RPA teams. The way the tools map to those different roles is shown here: a business analyst or manual tester on the application testing side would use Test Manager and Test Capture more regularly, while an RPA developer or test developer would use more of Studio and the Orchestrator side, the more technical side of the house. And then everything ends up with test robots that are going to be running your processes and testing your processes as well.

DEMO: UiPath Test Suite

Evan Bruns:

Let me go ahead and share my screen here, and we will get started. Everybody should be able to see my screen now. What we're looking at is UiPath Studio Pro, which, as Matt just mentioned, is the version of UiPath Studio that you would use to build these test automations. To start, we're going to run through what the differences are here and look at some of the unique features. Then we'll go upstream into Orchestrator and Test Manager and see some of the reporting as well.

To start with, one of the core differences here, compared to what we've seen in Studio, is that I'm not looking at a normal RPA sequence; what I'm looking at is called a test case. If we look on the left side here, we'll see my RPA sequences have this little UI icon; conversely, my test cases have this target icon, this bullseye icon, associated with them. That's really representing how they're treated differently by Orchestrator and Test Manager upstream. It doesn't affect a whole lot about what we can do with them within Studio, but it's important to mark our test cases the right way so they get recognized by the rest of the system.

In front of us here is one of these test cases in particular, and we see the common high-level structure of one of these test cases, which is something we call given-when-then. For those of you in the audience who have done testing before, who are familiar with the space, you've probably seen this or something similar to it. Essentially, the idea is: given is the steps I do to prepare to be ready to start testing, and when is the steps that I'm actually testing.

Finally, the then block is where we determine if our test passed or failed, based on the outcomes of the earlier steps. So, for example, this is a test of the UiDemo application. The given block basically makes sure the UiDemo application is open; the when block interacts with that application, in this case typing some numbers into it; and then we determine if we were successful, which in this case means: did those numbers add up correctly in that UiDemo application? Because we are testing the application itself, this is really an application test. Now, to take this up to the next level of complexity, we've got to think about how much value hard-coded numbers add. I might argue that, while they add some, they don't add a whole lot of value as a test case, because it might be that this app only works for those exact numbers, or there could be a whole lot of things this particular test misses.
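To make the given-when-then shape concrete, here is a minimal sketch in plain Python. The app class and its methods are invented stand-ins for the application under test, not UiPath code:

```python
# Minimal given-when-then sketch. The class below is a toy stand-in
# for an adder-style demo application; all names are invented.
class DemoApp:
    def __init__(self):
        self.open = False
        self.result = None

    def launch(self):
        self.open = True

    def add(self, a, b):
        self.result = a + b

def test_addition():
    # given: prepare so we're ready to test -- make sure the app is open
    app = DemoApp()
    app.launch()
    assert app.open
    # when: the steps we're actually testing -- type the numbers in
    app.add(120, 55)
    # then: decide pass/fail -- did the numbers add up correctly?
    assert app.result == 175
    return "passed"

print(test_addition())
```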

So I want to look at a couple of ways we can improve this test: to have it cover more scenarios, be more variable, and basically be a better test all around. The first thing we'll look at is what's called data-driven testing. I just hopped over one tab, and this test is basically the same test, except it's data driven. The given-when-then structure is still very much the same, but instead of those hard-coded numbers in the when block, we're now using variables like cash-in, on-us, and not-on-us. Those variables basically define where I pull numbers from when I run this test. When I created this test as a data-driven test, I had to add an Excel spreadsheet, and now, when I go to run the test, I can use any line in that spreadsheet as a test scenario. Here, for example, I had five lines in my spreadsheet, and each one represents a set of values for cash-in, on-us, and not-on-us. If I run this on Orchestrator, it's automatically going to run all five of those scenarios, so it covers as many scenarios as it can. Now, the other important thing in terms of data generation and management really falls on the generation side. We've added some capabilities around creating random test data at runtime, instead of defining test data beforehand. So if I want a random address, I can just pull in this random address activity, pick my location if I want to, and I'll get my address as output. Similarly, I can do this with numbers, names, dates, even just random strings or random values. The real key here, of course, isn't necessarily the capability of doing this, because in .NET we've always been able to create random numbers. But I think the real advantage of these data creation activities is that they're a lot easier to use, and they can also result in higher quality data than some of those more traditional approaches.
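As an illustration of both ideas, here is a sketch in plain Python: the `rows` list plays the role of the Excel spreadsheet (one test iteration per row), and a seeded random generator stands in for the random-data activities. The field names mirror the cash-in / on-us / not-on-us variables above; all values are invented:

```python
import random

# Data-driven testing sketch: each dict is one spreadsheet row, and the
# same test body runs once per row.
rows = [
    {"cash_in": 100, "on_us": 50, "not_on_us": 25},
    {"cash_in": 10,  "on_us": 0,  "not_on_us": 5},
    {"cash_in": 7,   "on_us": 3,  "not_on_us": 2},
]

def transaction_total(row):
    # Hypothetical calculation under test.
    return row["cash_in"] + row["on_us"] + row["not_on_us"]

results = []
for row in rows:  # one iteration per data row
    results.append(transaction_total(row) == sum(row.values()))

print(results)  # one pass/fail result per scenario

# Random test data generation, in the spirit of the random-value
# activities: fresh inputs at runtime instead of hard-coded values.
rng = random.Random(42)  # seeded so the run is reproducible
random_row = {k: rng.randint(0, 999) for k in ("cash_in", "on_us", "not_on_us")}
print(transaction_total(random_row) == sum(random_row.values()))
```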

Another really nice addition we see here with Studio Pro is our integration with Postman. For those of you who aren't familiar with it, Postman is one of, if not the, most popular API testing tools in the world. Basically, it's a very simple interface that allows us to build up API calls and test them out. What we've done with UiPath is enable the ability to take contents from Postman and consume them into UiPath, essentially, as activities. What we'll see here on the left is that if I go to my activities, I can look under Postman, and it's going to give me a list of calls that I've built in Postman. I was hoping I could show you what that looks like to build, but Postman is being a little slow today. Essentially, I'm going to parameterize the different elements of the API call I want, and once I've built that up in Postman, I can export it and import it into Studio, such that when I want to make an API call like this create load call, I just get raw inputs and outputs back without having to worry about a lot of the complexities inherent to making API calls. I don't have to worry about whether I'm getting a response in JSON or XML, or how exactly I'm sending it or what I'm sending; all of that has been taken care of in Postman. Now I just get to enjoy having access to this API call and being able to make it very easily.
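Conceptually, the parameterized call that comes out of Postman behaves like a templated request builder: the method, URL, and body shape are fixed once, and you only supply plain inputs. A hedged sketch in Python (the endpoint, field names, and loan-style call are invented for illustration, and nothing is sent over the network here):

```python
import json

# Sketch of a parameterized API call: the template is defined once, the
# caller supplies only the raw inputs. Endpoint and fields are invented.
def build_create_loan_request(amount, term_months):
    return {
        "method": "POST",
        "url": "https://example.test/api/loans",  # hypothetical endpoint
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"amount": amount, "termMonths": term_months}),
    }

req = build_create_loan_request(5000, 24)
print(req["method"], json.loads(req["body"])["amount"])
```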

One more capability that I don't think we'll have time to fully explore today, but I certainly want to call out, is that we are now also offering mobile application testing. We've partnered with Sauce Labs and BrowserStack to bring in, basically, mobile emulation, and then we're able to use Appium to automate on these emulated mobile devices. In that way, we can also test mobile applications, whether they're iOS or Android.

Now, the last thing I want to show you really relates to how we're going to move upstream. If I go to my project here on the left, as I was mentioning earlier, my test cases are marked with these bullseyes. You'll see, though, that some of these bullseyes are gray and some of them are blue. The bullseyes that are gray essentially represent tests that are in what's called draft mode, meaning that if I publish the rest of this project, my tests in draft mode will not be pushed to Orchestrator. Conversely, the ones in blue are publishable tests, meaning that when I push this to Orchestrator, those tests will be reflected up there. What's really nice about this is it allows our tests to coexist in a project without getting in the way. In this case, if I'm writing 10 tests at once and only five of them are ready, I can push up the five that are ready and worry about the other five later on. This is also a pretty core feature for RPA testing.
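The draft-versus-publishable split amounts to a simple filter at publish time: only tests flagged publishable travel upstream. An illustrative sketch (the test names and flag are invented, not UiPath's internal representation):

```python
# Draft vs. publishable sketch: publishing a project pushes only the
# tests marked publishable; drafts stay local.
tests = [
    {"name": "test_basic_add",   "publishable": True},
    {"name": "test_data_driven", "publishable": True},
    {"name": "test_new_feature", "publishable": False},  # still in draft
]

published = [t["name"] for t in tests if t["publishable"]]
print(published)  # the draft test is left behind
```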

When we think about RPA testing, instead of testing a third-party application, we actually want to test another workflow. I just jumped over here into an RPA project that has Main.xaml as its core workflow, and I want to take a look at what it might look like to test this. In this type of project, we're going to build the test directly in there; we're not going to make a separate testing project, because we already have an RPA project. What I'll do is find the XAML I want to test, right-click it, and say create test case. That will automatically build a test case relating to that RPA workflow for me. Let's take a look at this one I just made and get an idea of what it might look like. By default, it's going to give me my given, when, and then blocks, and it's also going to invoke whatever workflow I'm trying to test in the when block. As we were saying earlier, the when block is really whatever steps are doing the testing, or that we want to test. That's why we see the invoke going into the when block: because we're testing that RPA workflow, it goes in there. Now, to me, one of the killer features here is what I can do in terms of seeing my test coverage. If I debug the test I've got here, this is of course going to run my RPA workflow locally, because that's how we're testing it. But as I run through the steps, you may have noticed some of these steps actually turning yellow and then green in the background. What that's representing is what's called activity coverage. Activity coverage is essentially saying how many steps of the workflow you're testing were hit by your test.
So in this case, we hit 87%, because we hit all of these activities except for this one log message on the right side of the if statement.
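The coverage number itself reduces to a simple ratio: activities hit during the run divided by the total activities in the workflow. A minimal sketch in Python, using an invented eight-activity workflow with one unreached branch (which lands close to the 87% figure from the demo):

```python
# Activity-coverage sketch. The workflow and activity names are invented
# stand-ins; one branch of the if statement is never reached by the test.
workflow_activities = [
    "open_app", "type_values", "click_add", "read_result",
    "if_check", "log_success", "write_output",
    "log_failure",  # the branch the test never hits
]
hit_during_test = {
    "open_app", "type_values", "click_add", "read_result",
    "if_check", "log_success", "write_output",
}

coverage = 100 * sum(a in hit_during_test for a in workflow_activities) / len(workflow_activities)
print(f"{coverage:.1f}% activity coverage")  # 7 of 8 activities hit
```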

Now, if I were a really good tester, I'd probably write a couple more test cases to make sure I could hit 100% activity coverage, or, on a more complex project, we might just try to get close. Essentially, activity coverage is going to be our North Star for knowing approximately how much we're testing. Are we doing a good job right now or not? Are there big glaring things we're not testing that need to be checked out? Having taken a look at that, let's look at pushing this upstream. The first thing I would do, of course, would be to mark whatever tests I want as publishable, and once I've done that, I can press the publish button at the top right of Studio. That's going to move my tests up to Orchestrator, where we'll find them now. If I go to testing on Orchestrator and go to test cases, I'm going to see a list of every available test case, and if I search for UiDemo, I'll be able to find my three test cases relating to that particular application. Now, if I want to actually run these, there's another step I'll have to carry out. I would say this is pretty similar to the production side, because on the production side we've got packages which need to be made into a process in order to be run. Similarly, test cases need to be made into a test set in order to be run. So let's take a look at how that works.

Here on my test sets menu, I can press this blue plus button at the top right, and that's going to enable me to create a new test set. I can call it whatever I want, and I'll pick an environment from which I will pull one or multiple automation projects. In this case, I'll select all three so we can see all of them, but it could just be one or two. Once I get past that page, we're going to see a list of every single test case available in those projects. This is a really cool feature, and it's another thing that works alongside that draft-versus-publishable distinction to make test cases easy to manage. Essentially, I can write two test cases in two completely different projects and then use a test set to run them together. This one, for example, comes from one project, and this one comes from the UiDemo tests; it doesn't matter, I could mix and match them and run them together, and it wouldn't stop me. Now, I've actually already created a test set, so let's go ahead and run that one instead of creating a new one here. What we'll see is that this test set actually has seven tests in it: it has the very basic test I showed you at the beginning; it's got the data-driven test, which was the second test we looked at, and that's going to run five iterations, because there are five different variations we provided; and lastly, my very last test is one I just wrote so it would fail, so you can see what it looks like when a test fails in UiPath.
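A test set, then, is just a named grouping of test cases drawn from one or more projects. An illustrative sketch (the project and test names below are invented for the example):

```python
# Test-set sketch: test cases live in different projects, and a test
# set mixes and matches them into one runnable group.
projects = {
    "InvoiceBot":  ["test_login", "test_submit"],
    "UiDemoTests": ["test_basic_add", "test_data_driven"],
}

def build_test_set(name, picks):
    # picks: (project, test_case) pairs, possibly spanning projects
    return {"name": name, "cases": [f"{p}/{t}" for p, t in picks]}

test_set = build_test_set("regression", [
    ("InvoiceBot", "test_login"),
    ("UiDemoTests", "test_basic_add"),
    ("UiDemoTests", "test_data_driven"),
])
print(len(test_set["cases"]), "test cases in", test_set["name"])
```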

I think we are coming up on the last data-driven one here in just a second, and then we’re going to see that final one happen as well. Okay, so this test, which fails, should fail in just a second here. Beautiful. So if I close this down, we’re going to see a list of seven tests: six passes, one failure. On the passed ones I can, of course, look at things like my logging and when it stopped and started. But I can also view what’s called the assertion, which is going to basically show me, hey, what exactly were we testing, and what was the outcome? So here, this was the actual, particular thing I was trying to verify, and it also gives me a screenshot of what was on my screen at the time to help me understand what that was or why that happened.

Now, I would definitely be remiss, though, if I talked about test results without talking about Test Manager, because while Orchestrator does do a good job of keeping this all in a pretty organized manner, I’d say Test Manager is really the way we want to think about our requirements and our results. In Test Manager, we essentially start with what’s called a requirement. This is basically something the business wants, and it can either come in from a third-party system like Jira or Azure DevOps and get created automatically, like when a Jira story is created, for example, or we can create one directly within Test Manager. Either way, once our requirement is in the system, we’re able to assign test cases to it. So if I go to this one here, for example, we’re going to see a list of three test cases that have been assigned to it. And I could add more up here, either existing ones I’ve already built or completely new ones I define on the fly and say, hey, that’s a new test case for this now as well. Either way, this collection of test cases mapped to the requirement is going to determine what relies on what in order to be running.
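Conceptually, a requirement here is just a named container of test cases whose overall status rolls up from the latest result of each assigned case. A hedged, purely illustrative sketch of that roll-up, with invented IDs and status names:

```python
# Hypothetical traceability data: one requirement mapped to three test
# cases, plus the latest result recorded for each case.
requirement_cases = {"REQ-101": ["TC-1", "TC-2", "TC-3"]}
latest_result = {"TC-1": "Passed", "TC-2": "Passed", "TC-3": "Failed"}

def requirement_status(req_id):
    """Roll a requirement's status up from its assigned test cases."""
    statuses = [latest_result.get(tc, "NotRun")
                for tc in requirement_cases[req_id]]
    if "Failed" in statuses:
        return "Failed"      # any failing case fails the requirement
    if all(s == "Passed" for s in statuses):
        return "Passed"
    return "InProgress"      # some cases have not been executed yet

print(requirement_status("REQ-101"))  # prints "Failed"
```

The same mapping is what lets a single failing test case flag every requirement it is assigned to, which is the behavior described next with defect creation.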

It’s also worth noting that we can provide some tagging or things like that to keep these test cases and these requirements further organized and easy to search. Now, the other thing here, though, is that when I drill into one of these test cases, like this one here, we’ll see that this is actually a failed execution. So once again I see my log data, and once again I see my screenshot telling me what went wrong. But I’ve also now got the ability to push content back to the original system from which I got the requirement. And if I was hooked up to Jira, I could do that here. Actually, in fact, I am hooked up to Jira, so let’s do it. If I press Create Defect here, this is going to go into my Jira and automatically open up a new defect card basically saying, hey, there’s a problem with this specific test case, and here’s why. And then it’s also going to tie it back to all of the other stories, all the other requirements, that are associated with that test case. So not only is it going to update me on whatever story I happen to be testing that day, but it’s also going to update me on older ones that happen to have been affected as well. That way, whenever there’s a bug or a defect, we have a really full picture of what’s going on and why things are breaking, and we can do more to repair it. I have one other topic I wanted to touch on here, and then I think I’ll be passing it back.

But I wanted to talk a little bit about continuous integration/continuous delivery (CI/CD) pipelines, because I think this is another really important element of what we can bring to the table. I actually don’t believe I specifically showed this, but in Orchestrator we have the ability to schedule our test sets. So if we want to run our tests at a certain time of day, every day, that’s our simple way of doing that. Now, while that can do a lot, and I think there are certain scenarios where that really makes a lot of sense, I also think that continuous integration can be a really powerful tool in our toolkit. What CI/CD tools let us do is, basically, instead of running at a fixed time, run tests right when we need them: as soon as my code gets updated, I run the tests to test that code. Essentially, we can do this with Jenkins or with Azure DevOps. Basically, when we see our source code getting updated, either on the system we’re testing or the code we’re actually building, we can have Jenkins or Azure DevOps automatically trigger the tests. So in this case, for example, I pushed some code, and it automatically ran through this job, which checked out my code from the repository, packaged it up, and ran the tests to make sure it works. And then, because it worked, it was willing to deploy it to my actual Orchestrator, and I was able to see my results about that on there. What’s nice is that we’ve got a pretty tight integration here. If I want to build a pipeline like this myself, and I actually did build this one in about half an hour, not knowing a whole lot about CI/CD, essentially we can just put it together like building blocks, and we’ve got different components that basically let you slot together the different steps you might want to do.
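The gating logic of that pipeline, checkout, package, test, and deploy only if the tests pass, can be sketched in plain Python. This is a schematic of the flow described above, not an actual Jenkins or Azure DevOps definition; every function here is a stand-in for a real pipeline step.

```python
# Schematic CI/CD gate: deployment happens only when the test stage passes.
# All stages are toy stand-ins; a real pipeline would invoke UiPath tooling
# and Orchestrator rather than these functions.

def checkout():
    return "checked out source"

def package():
    return "built package"

def run_tests():
    # Pretend three test cases ran; the gate requires every one to pass.
    outcomes = [True, True, True]
    return all(outcomes)

def pipeline():
    log = [checkout(), package()]
    if run_tests():
        log.append("tests passed: deployed to Orchestrator")
    else:
        log.append("tests failed: deployment skipped")
    return log

for step in pipeline():
    print(step)
```

The point of the gate is the branch: a source-code change triggers the whole sequence, and broken code never reaches the deploy step.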

I would say, really, to sum it all up: Test Suite, especially from a technology perspective, shares a lot of DNA with our other products. Studio Pro is very close to Studio, and Orchestrator is absolutely the same platform. A lot of the reusability and a lot of the same concepts that apply to core UiPath definitely apply to testing. I think what we’re really bringing to the table that makes Test Suite a little different is the reporting, the managing, and some of these quality-of-life features like CI/CD that we see being really popular among testers.

Matt Holitza:

So that was a great demo; I think you covered almost all these areas. But the goal, the vision of Test Suite, and what we’re trying to accomplish, is that we’re not just going to do RPA testing, we’re also going to do application testing. We really want them to be an integrated process and part of the continuous testing lifecycle that testing teams have used for a while now, for the last 10 or 15 years. And so you can see Test Suite allows us to do not only the RPA testing, but it also allows you to do functional, integration, and regression testing as well. And that’s important, because when a component changes, like Workday, looking at it from a complete testing perspective, you may want to do a smoke test on the application itself and then run your RPA tests, so that you can get a report of the potential things that might break because of an update from Workday.

When we look at the ultimate goal of this, if test teams and RPA teams use this in concert, in a coordinated fashion, what’s possible is that we can set up triggers in Orchestrator to look for changes happening in Azure DevOps or Jenkins, to name two examples, and then run the relevant RPA and application test sets. And then eventually, on the right-hand side here, using the Test Manager dashboard, our RPA teams and our test teams can go out and do preventative maintenance. So before something breaks in production, before a robot breaks in production, you’re able to see what’s potentially going to break before it impacts production. That way you don’t have that chaos when something comes down and you have to get the team scrambling to maintain it; you can do that ahead of time, so you have headlights into what’s coming in the future.

Now, going back to the onboarding example I started the presentation with, what Test Suite allows you to do is really to mitigate all of these issues. Probably not ever 100%; testing is never 100% coverage. But you’re going to be able to cover a lot of these areas to create more resilient robots, and make them more resilient to changes and environment updates. And so that’s really what our goal is with Test Suite in the long term.

And really, what we’re looking at is creating a world-class continuous automation practice for a fully automated enterprise. That, like I mentioned earlier, involves not only the RPA team, but extends into IT operations, development teams, and enterprise teams, including citizen developers, so that they’re all using the same platform. That allows you to centralize governance; having a common approach allows you to create those resilient robots, which I’ve covered quite a bit; and it also covers that reuse piece we talked about with the SpaceX example, so that people, even the citizen developers in the organization, can reuse these components as well. All of this allows you to scale faster by sharing the skills, experience, and automation components across the enterprise. And something else I didn’t mention earlier, but I really want to because it’s an important point: organizing your automations around business processes really aligns the organization around the processes that are running your business. Whereas in the past the application testing team may have been kind of siloed, and all they’re doing is looking at Workday to see if it works, now, if they’re working with the RPA team, they can look at it more holistically, in the context of how the business uses it, and do more risk-based testing based on the actual business processes running the business. And so that’s another important point.

And with that, we have a lot of resources to look at. If you want to go to our landing page on UiPath.com, you can get the idea of what Test Suite is and what it’s about. We also have free training on Academy, so you can go out to Academy and get trained on Studio Pro and Test Suite for free. And then we have about 30 videos now on YouTube. Evan touched on a little bit of what Test Suite does, but there’s so much more that it does: SAP testing, Citrix testing, a deeper dive on the RPA testing. We have several videos out there, so go out there, really explore that, and get to know the capabilities a little bit more. And with that, I think I’ll turn it back over.

Question and Answer 

Any questions from some of our attendees here? Okay, so maybe a couple more questions. I think we saw that there’s definitely a reusability component to this. And to go a little bit deeper into the technical side of things: do the test cases that we create within Studio Pro travel with the .xaml file?

Yeah, absolutely. Just because it’s code written in Studio Pro doesn’t mean it couldn’t be code you might run as part of a process. I think a lot of customers we’ve seen being successful with this really are double-dipping, both on the test side and the production side. You can either take elements of your production automations and use those as a starting point for tests, or, as you’re saying, go the other way: take some tests and then say, hey, actually, these components right here can be used on the production side, and that’s going to cut a month off our development. That’s going to bring a lot of value.

And another important point on that, as you mentioned: when you create your test cases as you’re building your workflows, the thing that we sometimes don’t emphasize enough is that those do travel to Orchestrator. You can set up test sets and, like Kevin talked about earlier, you can schedule those to run on a daily basis, or you can have them triggered, so that if something underlying changes that is going to impact that robot, your test cases for that workflow can run automatically. So, like I mentioned earlier, you get those headlights on things that could potentially be broken before the next version of an application or an environment change comes out.

That’s definitely a big component there. So maybe a final question: we’re talking about kind of an RPA suite here, and it already crosses some of the silos of different capabilities. Is there a longer vision for Test Suite, maybe expanding beyond just RPA?

So yeah, we are looking ahead and thinking about not only the testing piece of it but, as you mentioned earlier, a lot of governance pieces that we think are going to be needed. The RPA development market is maturing pretty rapidly, and I think some customers are doing this, but it’s not consistently being done. So what we’re looking at doing in the next year or so is coming out with something called Automation Ops. It’s kind of the cousin of DevOps, and we’re looking at how those two can coordinate to really build this digital pipeline that crosses over not just development and operations but also into the business, and to make sure that whole pipeline and governance piece is all coordinated.

Thank you, everybody.


Begin your intelligent automation journey today

Our team is ready to guide the way.