January 29, 2024

00:14:31

Lessons Learned - E74

What Counts?

Show Notes

You've figured out whether to do a pilot project or a proof of concept; now document your lessons learned. How many hypotheses are too many to test with a pilot project? What do you do with the lessons learned that you collected? Join information governance consultants Maura Dunn and Lee Karas as they give real-life examples to help you document and follow up on lessons learned. Each episode contains important information to help guide businesspeople through project implementation.

Episode Transcript

[00:00:01] Speaker A: Documenting what you learned from your pilot or proof of concept. Hello. Thank you for joining us. This is What Counts, a podcast created by Trailblazer Consulting. Here we highlight proven solutions developed through our experience working with companies across various industries, and we talk about how you can apply these solutions to your company. We share our experience solving information management challenges like creating and implementing a records retention schedule, creating an asset data hierarchy, or helping with email management. This is Lee, and in this episode Maura and I will discuss the mechanics of what you do after your pilot or proof of concept. Yeah, you with me? [00:00:40] Speaker B: I'm with you. Not just after, though. It's through the whole process. So in a recent episode, we talked about how people use the term pilot or proof of concept. You might even hear soft launch as sort of phase one of an implementation project. And we try with our clients to put some more structure around those terms. So proof of concept is, to me, the most important one. We defined it in our last episode, but just as a reminder, the proof of concept is really, hey, we're making a big change. We're anticipating a big change in system or process or both. And we have some questions about how it's going to work. And so we have a plan and we want to test that in a very confined way. And at the end of the test, we will make a go/no-go decision. And it could be no, like, hey, this didn't work like we thought it would, and let's make a different decision. Pilot, on the other hand, is more like we're pretty sure about where we're going. We have the plan on how to move forward. We've got pieces together and we want to test everything out on a small group before we go to a bigger group. But it's pretty unlikely that at the end of the pilot session, we're going to decide to stop.
[00:02:01] Speaker A: Yeah, there may be some tweaks along the way just to get it right for that pilot group, right? [00:02:07] Speaker B: Pretty much. And use things from the pilot to tweak the bigger plan. But we're not really planning that our path is going to change. We're not going to make a hard right turn at the end of it or stop or pause or anything. Soft launch is even squishier than that, which is, we call it a pilot, but really it's just the first group we roll out to, and we're not going to make any changes in our rollout philosophy. Okay, so we said all of that, and we've had some good examples about what we have done in the past that were pilots versus proofs of concept and how they worked out and what we learned from them. But as I was thinking back to that episode, I thought it might be helpful if we walk through the steps for how you do this. And it's the same steps whether it's a pilot or a proof of concept, but the artifacts or the outcomes are different. So the first step is, what is it? Are we doing a proof of concept? Are we doing a pilot? And the question you need to ask yourself there is, how confident do we feel in our plan? If we are 99% confident this is the right plan and we just want to test a couple of things and maybe make some tweaks, that's a pilot. If we are feeling okay about our plan, we've put a lot of thought into it, but there are some questions we just can't answer until we get into it, that's more of a proof of concept. So the first step is to make that decision and make sure all your key stakeholders understand: this time we're doing a proof of concept because we think this is the way to go. Here's all the work we've done to get to this point, but we have these questions outstanding, and we think the best way to try and answer them is to do a proof of concept and then take a hard look at what we find. So that's step one. Step two is to actually write down what we are testing. So what are those questions?
Whether they are big or small, whether you're in a proof of concept with big questions to answer or you're in a pilot with small questions to answer that might result in some tweaks, write them down. And I would say if you are coming up with a list that has more than three things that you're testing, big or small, you're not ready for either a pilot or a proof of concept. You need to go back to requirements and design, because this is supposed to be confined: a limited amount of time, a limited group of people involved. Your pilot or your proof of concept can't just drag on for years while you try and figure out what you're doing next. That's not what we're talking about here. So up to three big questions or small questions that you're trying to answer with this activity. Write them down, and write down how you are going to measure the answers. So let's go back to our example about email management that we talked about in an earlier episode. We had some questions, we had a plan. There was a strong need. Everyone in the leadership agreed we should start rolling email off for this company. They didn't want email to keep building up. That was the vision for the project. But there were some questions on what's the best way to do it. IT was like, just do it time-based, just get rid of stuff. And legal was like, hold up, you can't just do that, because some of these things are records and we need to keep them. Especially because that company had done an assessment and we knew that a lot of their business processes were supported only in email. They did a lot of back and forth, a lot of approvals, a lot of decision making in email. And people counted on being able to find those old emails. All right, so we're faced with time-based versus wait, we need records. How do you help people to decide what is a record? And we looked at different options. We looked at by level, by position, kind of following in the Capstone footsteps from the federal government.
We looked at everybody making decisions along the way on their own, and we decided that everybody needed to make their own decisions. But in our proof of concept, our first plan was we want people to actually follow the records retention schedule. So we want them to think about what record category this email would go into. Not just is it a record, but which kind of record? And so in our proof of concept stage, we built a file structure in the email management system where the participants would have to pick the right folders to put their email records in. And our reasons for that were good: we wanted to follow the retention schedule, these are records, and we wanted to be able to apply retention in the folders where people filed the emails. However, the proof of concept showed us nobody could do that. They had a lot of feedback for us on how it was too hard to tell the difference between folder A and folder B, how when it got to be too hard, they just stopped, or they put everything in folder A because it came up first in the list, or they're like, maybe it's not a record after all. So our concept of, hey, we want to get things filed appropriately right at the beginning so that we can apply retention, and ultimately also security from an information classification perspective, which is all tied to the retention schedule. We had good reasons for that. It's better to make those decisions upfront, when the people who created the email and know the most about it are closest to it in time. But practically speaking, that was a failure on arrival. That was not going to happen. [00:08:07] Speaker A: It almost sounds like you're testing a hypothesis. Will a folder structure be easy for people to understand and file their emails into? Something like that. [00:08:17] Speaker B: Yeah. No, actually, that's a perfect way to look at it. That is what we're doing.
The goal of the pilot or the proof of concept is testing hypotheses to see if your plan is going to work, and you want to test those hypotheses in a lower-risk way, where you've got a small group of people in a short amount of time. Not whole hog, where we've moved everybody to the new system and guess what? They hate it, nobody's using it, and they're going back to spreadsheets because they don't care. So that's the definition step. First, is it a pilot or a proof of concept? Second, what are we testing? What are the questions we're trying to answer, what are the hypotheses we're testing, and how are we going to measure them? So in our email example, our primary measurement was the feedback we got from the proof of concept participants. Our secondary measurement was quantitative as opposed to qualitative, which is, we looked at the folders to see, did people file things? And we saw the pattern: groups that had three or more categories to look at, or something like that, couldn't do it. It was too hard, too many categories to pick from. If a group only had one category, they were more likely to file things, and the things that they filed did meet the definition of a record. And we did training to help them go through this proof of concept and know how to answer the questions. So our measurements were both qualitative and quantitative. And then we got to step three, which is, okay, we got the feedback, we did the counts. Now what? And that's a kind of analysis of your findings. What are the lessons learned here? What did this proof of concept tell us about our bigger plan for rolling out email management? Well, it told us that people will not try to figure out record categories. The second thing it told us was not something we'd anticipated, but we had a lot of people in the feedback session say, I want to make up my own folders. I want to make up my own folders in the folder that you asked me to put things in. And we didn't care about that.
We hadn't seen it as important at the beginning, but we also didn't mind that people did it. So we took all of those findings, the kind of scattered and inconsistent rate of people filing things at all, plus the verbal feedback of it's too hard to choose among categories, and the bonus feedback of I want to make my own folders, and we changed our implementation plan for the bigger rollout to say, we're going to big-bucket even more. The retention schedule for this company was already a medium-sized bucket based on business process and function, but we made it a bigger bucket, pretty much based on department, and put everybody into the folders that we assigned them, each into the same folder that matched their department. We met with all the vice presidents and said, here are all the staff that are going to end up in this folder, who will be putting information into the same place, sharing access to the information that's in there. We got some feedback from the vice presidents at that point saying, hey, it's too many people, we need a smaller group, or a more confidential group for senior managers. So we created those, and we added to our training how to set up a folder in your shared environment, so that we could give people that flexibility and we would know what they were doing. So that worked out well. We then ran through dozens of training classes and dozens of small group meetings, gave people feedback on what they were doing, answered all their questions, and started rolling that out. So going through the disciplined process: we're doing a proof of concept, which means we're going to make big decisions at the end of this, it's not just an exercise we do because we have to; what we were testing about record categorization and filing; how we were going to measure it; and what we did with the feedback. That discipline helps, in every case I've seen.
When you don't have that, pilot projects tend to go on for a long time and you don't get anywhere. What do you think about that? [00:12:50] Speaker A: No, I think you're absolutely right, and the email project is a great example in many ways. But I think what I heard you say is: what is it, what are you doing, a proof of concept or a pilot? Decide that right up front. Then write down your hypothesis or hypotheses so that you can prove out one or the other, do the analysis, figure out what you learned from that, make some changes, and then change your implementation plan so it fits your needs. That's kind of what I heard. It's logical. [00:13:27] Speaker B: I think so. And it works. We've seen it work multiple times. I've also seen pilots where everybody said it was a pilot, but really all that happened was somebody stood up some software and three people used it, and then it was there, but nobody ever went back to it. And that wasn't a pilot, and it wasn't a successful implementation. So I think it's important to be disciplined about how you use these terms and how you carry out these steps. It leads to better overall project success. [00:14:00] Speaker A: Makes total sense. [00:14:02] Speaker B: All right. [00:14:04] Speaker A: If you have any questions, please send us an email at info at trailblazer us.com or look us up on the web at WW trailblazer us. Thank you for listening, and please tune in to our next episode. Also, if you liked this episode, please be a champion and share it with people in your social media network. As always, we appreciate you, the listeners. Special thanks goes to Jason Blake, who created our music. [00:14:29] Speaker B: Thanks, everyone. Bye.