I'm here with Greg Cohen, author of Agile Excellence for Product Managers and 20-year product management veteran. Thanks for joining us, Greg.
>> I'm excited to be here. Thank you, Alex.
>> Slack is a big topic with agile teams, and it's sometimes a difficult area for the product management folks and the engineering folks to reach good concordance on. What do you see working well in terms of having useful discussions around slack, and also getting to the right amount of slack in your agile cycles?
>> So for me, I think this is where I side with the XP camp, that one should be doing continuous refactoring.
>> Mm-hm.
>> You need to just build that in, and that's a cost of doing business, because the most costly and really destructive thing for the product is if you have to set aside a full, long release cycle to refactor your code and keep the technical architecture up to date.
>> Got it. Got it. And work in progress is another big, related topic in agile. I like Trello as a way to make that visible, but of course there's a lot more to it than that. What do you see working well in terms of making sure not everything stacks up in QA at the same point, for instance, or just generally balancing work in progress?
>> Yes. I think this is where there are many tools, and you can do it physically with index cards. But the idea is that you have a task board or a kanban board, and you see where work is. And you set limits for how much work in progress you are going to allow in your process, and then it becomes an act of balancing how much you're working on at one time against how long it takes you to go from start to finish. So we can add more work into the process, but our cycle time, how long it takes to go from start to finish, is going to increase, and we have to find the right responsiveness that our business and our company need to maintain.
>> Got it. Let's talk about burn down for a second. Do some teams find that it's better to just look at actuals, while other teams find it useful to keep an eye on burn down during their cycles? What do you see working well there, and what are the determinants of whether burn down is a good idea for a given team or cycle?
>> Right, yes. I think whether it's burn down or burn up, the idea that we are tracking the team's velocity, that we are getting a real measurement of their throughput, is absolutely essential from the planning side. That's what allows me as a product manager to know whether we are on track to hit our deadline, and I always know, too, exactly what I am going to pull out of the product if I need to, to bring us in to hit a date. And this is also where the estimating helps. You can count just pure cards and stories if you decompose them small enough, but you need some level of estimate to measure that throughput from the team, and then put that into a burn down or a burn up so we know how we are progressing against the plan.
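To make the relationship Greg describes concrete: the trade-off between work in progress and cycle time follows Little's Law, and the same measured throughput (velocity) is what drives a burn-down projection. Here is a minimal sketch in Python; the function names and numbers are illustrative, not figures from the interview.

    # Little's Law: average cycle time = work in progress / throughput.
    # All values below are illustrative examples.

    def cycle_time_weeks(wip_items: float, items_finished_per_week: float) -> float:
        """Average time from start to finish for an item currently in process."""
        return wip_items / items_finished_per_week

    def sprints_remaining(remaining_points: float, velocity_per_sprint: float) -> float:
        """Burn-down projection: sprints of work left at the team's measured velocity."""
        return remaining_points / velocity_per_sprint

    # Adding more work without adding throughput stretches cycle time:
    print(cycle_time_weeks(6, 3))     # 2.0 weeks from start to finish
    print(cycle_time_weeks(12, 3))    # 4.0 weeks at the same throughput

    # Tracking velocity shows whether the team is on track to hit a date:
    print(sprints_remaining(80, 20))  # 4.0 sprints left at the current velocity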
>> Let's talk a little bit about Agile and QA. What are some of the tensions you see between how all that should ideally work, between testing, dev, product management, and the product owner, and how you see things working in practice a lot of the time?
>> Right, yeah. On the QA question, I have found that to be very contextual amongst teams. I have seen teams with one QA person, and that person's role was really to test the product once it was completed on all the different platforms. I don't generally recommend the product owner do it; although they can run acceptance tests, they shouldn't be doing real QA. With one team, I did play that role, but the QA burden was probably about 40 minutes every two weeks, because all the testing had been automated in that solution. It was a low burden, and once again, we didn't have a dedicated QA person. But many other teams, because of the scope and the complexity, do need dedicated QA people, and those people deliver a huge amount of value. And you then need them in the discussion when you're talking about the stories and setting the acceptance criteria, so they know how to test it, and they can also think about all the boundary cases and do the integration testing and the things that are out of scope for pure acceptance testing.
>> And what do you see as the role of stories and narrative in QA?
>> Well, just like the developer, the QA person also has to have that shared understanding of what the business user or customer wants to achieve. So if the story informs them, that lets them ask the right questions and create the right tests, as well as understand how the user might use the system in a boundary scenario, so that gets documented and taken care of.
>> There's kind of a running joke that if a team does daily stand-ups, that means they think they're doing Agile. But of course there's a lot more to it than that, and obviously there's a big distance between completing the ritual of the daily stand-up and really having a self-organizing team. What do you think is the role of the daily stand-up, and how does it contribute to this goal of self-organization?
>> [LAUGH] I have seen teams that say they're agile and they are only doing a daily stand-up; I've had that experience. So I think the role of the stand-up is a quick synchronization for the team. It's not where you problem-solve, that gets taken offline, but it's to make sure everyone knows what everyone else is doing, and everyone knows if there are any impediments the team is facing. But when I think about these teams that do the stand-up and say they're agile, there are a lot of times where agile and ad hoc get confused, and those teams are really practicing what I'll call ad hoc. Agile takes a lot of discipline, and there are a lot of practices to work in there, even if you want to get to the XP side: development practices, continuous integration and unit testing, continuous refactoring. All these things go into creating a well-functioning, disciplined Agile team, and it extends well beyond the daily stand-up.
>> Got it. And what do you think about the role of retrospectives and post-mortems? How often should teams do them? What are the success criteria if you're working with a team and helping them get to a good place on those?
>> So in my mind, I distinguish between post-mortems, which were something I did when I was using traditional serial development methods, and retrospectives, which I've used in Agile. What I saw happen with the post-mortem is that our release cycle might be once every nine months. So once every nine months, we as a team sat down and, mostly, we vented. From that venting, we'd take a few things that we felt we could improve, and we'd try to incorporate them into the next release cycle. But in that case, we didn't get another data point on whether those changes were helping for another nine months.
>> Yeah.
>> And that's not frequent enough. For me, the retrospective is that once a week, or once every two weeks, you're coming together and saying, what can I change about this process? Then you're getting another data point pretty quickly to say, okay, this change worked, this change didn't work. And if it didn't work, should we abandon it because the idea isn't going to work, or did we just not implement it correctly, and should we change it in a new way and stick with it?
>> Let's talk about getting started with Agile. Are there any common patterns that you see out there? How do you work with a client who wants to get started or reboot an Agile practice?
>> I used to believe that you had to start with one small team and show success there, and that you would cherry-pick that team, too, with people who were very receptive to the idea of working in a new way and were excited about that. But then I learned about salesforce.com; they did a wholesale Agile change at their company. They said everyone is going to change to this new process, and from what I heard from insiders there, it took 18 months to get proficient at it. It was very painful, but they did achieve it, and they got the business objectives they were trying to achieve, which were to increase how much they were releasing, so the amount of value they could create in a period, and to get to a quarterly release cadence, where previously releases had been pushing out toward the year mark. As I said, it's contextual. I certainly still believe it's easier to start with one small team, see how it works in your environment, and then, when they have success, roll out those best practices. But you don't have to do it that way.
>> What about roles and people? Are there any patterns you see about it being introduced in a certain way, or by a certain type of person, that create more or less success, or anything else that's relevant in that area?
>> Right. Clearly, Agile came out of the world of development, and it impacts the engineering department and the developers and managers in that department more than any other group in the company. I mean, there are other groups that interact with engineering that it impacts, and who are dependent on the release cycle and what happens there, but ultimately that is the group that really needs to own it. Therefore, having that leadership on board with the change is, I think, the single most important thing to getting acceptance within the organization. It's very hard to drive that change if engineering management isn't on board.
>> Got it. Well, that's some great advice on making Agile work in the real world. Thanks so much, Greg.