Allow me to share a new internal GenAI use case that one of our team members developed. Super cool. It ‘captures’ our learnings as they occur and shares them with the team once a week.

Background first

One of our values as a company is learning. Like all values, it’s a pretty obvious ideal, but so are the other 150 we could have chosen but didn’t. They all matter, but learning is one of only five we choose to prioritise. So, as you’d expect, we do a lot to live that particular value. And that’s a reason we seem to be able to attract, grow, and retain a team of the calibre that we do. We really invest in learning.

I have been frustrated recently that our commitment to learning manifests as an aggregate of many discrete initiatives. Don’t get me wrong: each is incredible. Our philosophical adherence to the buyer’s journey is consistent. And the learning devices connect in other ways, but they are still discrete. I wanted our learning to be a ‘system’.

A quick example: We have a ‘toe stub’ register (thanks Ray Dalio). If I did something today and it didn’t work well, I am asked to stop, feel the pain (so it motivates me to act), note the toe stub, reflect on what it taught me, identify actions that I or we can take to improve, bake them in somehow, and move on. We genuinely celebrate these micro-failures in the same way that a good session in the gym builds muscles by causing micro-tears, which then heal as new muscle.

Another example that is somewhat more formalised: we have a GTM workshop known to our direct clients as Funnel Camp, to our SAP partners as Funnel Mastery, and to our AWS partners as PTP. We do this often enough, and it’s big enough, that we have a formal process to capture the learnings and share them with the team. After literally every delivery, those involved meet for 30 minutes and bring to that meeting their thoughts on what we did differently that might be worth preserving and what we need to improve for next time. And how. Then, once a month, a small team meets to review all of those insights and identify actions we can take to improve our delivery. We’ve done over 500 Funnel Camps in various guises, but we still learn lots every single time. The ‘product’ is still world-class 24 years after our first Funnel Camp because of that commitment to continuous improvement. That micro process to capture the learnings is one of the more mature sub-systems.

But it is still separate from the toe stub register, the process we use to debrief a training delivery, how we reflect on a new release of our application, or how we unpack the momentum created with a new campaign for any of our digital marketing clients. They are each discrete.

A learning system

So we created an initiative to build a ‘learning system’. It has a small team tasked with driving this in 2024. They built a reverse brief (a good habit for a marketing company) and reviewed it with me last week. I was really impressed and felt somewhat empty-handed when I came to the review meeting, as I didn’t have a lot to add. They’d done a great job. Really great. My only suggestion was that we needed some way to capture learnings as they occurred. It needed to be light, low friction for the person offering the learning, immediate, and not place a tax on someone else to sort, note, or categorise it.

I lacked a visual metaphor, so I was just describing this as a “blob”, a “thing” in the corner where any of us could ‘throw’ our idea when it occurred, and we’d deal with the learning later. At some stage in the discussion, I changed this to a “catcher’s mitt” – still not a perfect metaphor but good enough.

Early versions of a catcher’s mitt

Logically enough, the leader of this initiative made a simple spreadsheet. “Don’t worry about the tech. That will come later. Just get the ideas right” had been one of my asks of the team.

The spreadsheet asked the right basic questions like:

  • What is the learning?
  • What domain (planning? training? acquisition or onboarding of staff? staff growth? fun? partner acquisition / onboarding? one of the digital marketing tactics we employ for customers?)?

And some basic metadata like:

  • From whom?
  • When?
  • Who will this affect?

Remember, this was not (yet) about improving, just capturing the insight when it occurred. There were more columns, but I am just giving you the idea here. A simple enough spreadsheet with logical enough columns. Good.

But this was still a bit clunky. We were asking the broader team to bookmark yet another spreadsheet and to add the learning plus the meta data.

So they made a cool automation where I could just put all of that into an email, and the data would find its way out of the email into the spreadsheet. As long as I structured the email correctly, the automation would strip out the data and put it in the right spot.

Better, but still clunky. I needed to save an email template with the formatting or my idea would get all jumbled.

An AI and an automation

And then the magic happened.

Now, I send an email to the catcher’s mitt (that’s not its name). Any format. Any email length. Any chain depth. Any topic (toe stubs and Funnel Plan improvements are two of maybe 40 we need).

Our AI captures the learning, consolidates it, categorises it, and records it, including all of the metadata.

Then, once a week, it pulls out all of the learnings and sends an email to our team summarising the key learnings for the week.
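The two steps — extract a structured record from a free-form email, then roll the week’s records into a digest — can be sketched roughly like this. Everything here is illustrative: `call_llm` is a stand-in for whichever model API is actually used, and the field names are assumptions, not our production pipeline.

```python
import json
from collections import defaultdict

def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call."""
    raise NotImplementedError("plug in your model provider here")

EXTRACT_PROMPT = (
    "From the email below, extract JSON with keys: learning, category, "
    "sender, date, affected_teams. Email:\n\n{email}"
)

def capture(email: str) -> dict:
    """Turn a free-form email, in any format, into one structured record."""
    return json.loads(call_llm(EXTRACT_PROMPT.format(email=email)))

def weekly_digest(records: list[dict]) -> str:
    """Group the week's learnings by category for the summary email."""
    by_category = defaultdict(list)
    for r in records:
        by_category[r["category"]].append(r["learning"])
    lines = []
    for category, learnings in sorted(by_category.items()):
        lines.append(f"{category}:")
        lines.extend(f"  - {item}" for item in learnings)
    return "\n".join(lines)
```

The point of the design is that all the structure now lives on the machine’s side: the sender writes whatever they like, and the model does the sorting and categorising that the earlier versions pushed onto people.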

It just sits in the corner ready to catch our learnings and share them once a week. I love this as a technological answer to a well-considered use case. I am super proud of the team leading this initiative and the creative approach they took to improving how we learn.

But that’s not why I wrote this article.

Use case first

For the last six months, we have been working with our friends at AWS to build, then pilot, and now roll out globally a new workshop to help AWS’s partners find their best GenAI use cases for their customers. First, we work through 24 foundational capabilities and identify actions to improve the eight of those 24 that we agree are most important for that partner. Then we workshop a crazy-long list of use cases which we run through seven commercial lenses to find the ones that will most move the needle for that partner. Then, we build a plan to ideate, demonstrate, prove, and then scale that use case.

It has been a steep learning curve, but now we’re ready to accelerate. We’ve added eight super-experienced, heavy-hitting team members to give us geographical coverage in North America, Europe, Japan, ASEAN and India, the Middle East, and our home base in ANZ. VP-level tech leaders with loads of sales and marketing expertise and sharp thinking around AI use cases. That also gives us native language coverage for English, Hindi, Punjabi, Arabic, and Japanese, fluency in Portuguese, and good enough for delivery in French and Spanish.

I’m excited to work with this team and impressed by their depth of experience.

But that’s not my point either.

Like our own fun development of a “catcher’s mitt for learning” where the need was clear and then the technology was found, the development of this workshop began with a clear use case. The shape of the workshops and the tech and team needed to deliver them came second.

And it is use cases that we develop in the workshops.

The how comes after the what, and the what comes after the why. It kind of sounds like a buyer’s journey, don’t you think?

In your company’s AI journey, spend enough time in the sandpit just playing. Learn from the playing. Keep it going. But also carve out the time to think through the use cases that will move the needle. Then, find the tech.

This article was first posted here on LinkedIn.