Summary
If you prefer to listen to this blog post in podcast form rather than read it, click the play button below. The audio here is an AI-generated podcast version of this blog post.
About Us
👋 Hey there! Somya and Henna here - former consultants from Bain and Accenture with 12+ years each in the industry. Somya has worked at high-growth companies such as TikTok, while Henna brings 6 years of entrepreneurial experience building startups. Having spent the last year experimenting with Generative AI, we're here to share our first-hand insights. We're passionate about strategy, workplace culture, and of course, AI.
Introduction
Note: Throughout this post, when we say GPT, we're referring to all major LLM chatbots like ChatGPT, Claude, and Gemini.
Personally, we have become increasingly reliant on GPT, Claude and Gemini for everything from designing product features and creating mock-ups to generating content. On the other hand, we see the majority of non-tech professionals in our circle struggling to extract value from Generative AI, using only a fraction of its capabilities.
Most people find that GPT’s output is often generic, shallow or simply wrong. And then they go, “I was scared by all the news saying AI is coming for my job. So, I tried it. Well, it can’t even draft a half-decent email. There’s time before it takes away my job. 😆”
The truth is, while AI may not be coming for your job, a similarly qualified person who also knows how to leverage AI well is almost certainly coming for it.
So it might just be a good time to start learning how you can make these AI tools work for you. In this post, we’ll look at:
Why does GPT not seem to get what I want?
How can I get first-time-right answers from GPT?
What can GPT realistically do for me?
Let’s dive in.
Why does GPT not seem to get what I want?
Granted, GPT is not perfect, but it is pretty powerful even in its current state, and it is improving fast. A large part of the reason you may not be getting your desired output from GPT comes down to two things:
Misplaced expectations: You are expecting it to excel at everything independently. Imagine asking an entry-level analyst to lead a billion-dollar merger.
Incomplete instructions: You are not communicating in enough detail what you’d like it to do. Think GIGO: garbage in, garbage out.
Providing better instructions is all about good prompting. But before we get into the weeds, let's pause for a bit. Here's the mindset shift you need:
Think of GPT as a diligent, smart generalist in your team. With enough context and detailed instructions, they do specific tasks fairly well. While you still review the work, it's much faster than doing it yourself. And it frees your time to take up more strategic tasks. This is exactly how GPT can help you.
With that out of the way, let's jump into how you can get first-time-right responses from GPT by writing better prompts.
How can I get first-time-right answers from GPT?
A quick note: This guide is relevant for complex tasks. For simple tasks such as "how to negotiate a salary raise" or "what is an LLM" – just ask GPT directly.
The basics of good prompting
Fun fact: The skills that make successful consultants - structured thinking and clear communication - are exactly what make great prompt engineers. Who knew? 😉
Using the techniques below will almost always add value:
Assign AI a specific role
Provide context
Clearly define the task
Attach reference documents
Define the guardrails
To see this in action, let’s look at the example below of writing a prompt to create a marketing campaign:
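Here is a minimal sketch, in Python, of how those five elements might come together into a single prompt you can paste into ChatGPT, Claude or Gemini. The sunscreen product and the campaign details are made up purely for illustration.

```python
# A minimal sketch: assemble the five prompt elements into one message.
# The brand, audience and constraints below are hypothetical placeholders.
role = "You are a senior marketing manager at a D2C skincare brand."
context = (
    "We are launching a mineral sunscreen aimed at urban women aged 25-35 "
    "who commute daily and care about clean ingredients."
)
task = (
    "Design a 6-week launch campaign covering objective, channels, "
    "key messages, timeline and owners."
)
reference = "Use the attached product note for features, pricing and USP."
guardrails = (
    "Keep the budget under $50k, avoid paid influencer partnerships, "
    "and keep the tone premium but friendly."
)

# Join the pieces with blank lines so each element stays clearly separated.
prompt = "\n\n".join([role, context, task, reference, guardrails])
print(prompt)
```

Even if you never touch code, the structure is the point: role, context, task, reference, guardrails, in that order.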
Advanced prompting
Now that we've covered the fundamentals of prompting, let's explore some advanced prompting techniques. These methods require more setup but deliver higher quality output.
Structured output
You can make GPT’s output more predictable by forcing the output format. For example, when designing a marketing campaign, include:
“Provide your answer strictly in this format, in 2-3 bullet points for each topic:
- Campaign objective
- Product USP
- Target group (TG) characteristics
- Campaign approach
- Timeline
- Next steps and owners”
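If you ever send prompts programmatically, you can take this a step further and ask for machine-readable JSON. Below is a sketch using the OpenAI Python SDK; the model name, the key names and the tea-brand example are our own assumptions, not part of the original prompt.

```python
# A sketch of structured output via the OpenAI Python SDK: ask for JSON and
# parse it, so the answer always has the same fields. Model and key names are
# assumptions; expects OPENAI_API_KEY in your environment.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Design a marketing campaign for our new herbal tea brand. "
    "Respond in JSON with exactly these keys, each holding 2-3 short bullets: "
    "campaign_objective, product_usp, tg_characteristics, "
    "campaign_approach, timeline, next_steps_and_owners."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # constrains the reply to valid JSON
)

campaign = json.loads(reply.choices[0].message.content)
print(campaign["campaign_approach"])
```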
Chain of Thought (CoT)
What is it?
With CoT prompting, the model breaks down a task into a sequence of logical steps to arrive at the answer. This approach is similar to how we learnt to solve an algebra problem in school.
The simplest way to use CoT is to skip specific examples in the prompt and let the model use its internal knowledge to figure out the steps; if you do have worked examples, add them to the prompt. Below is an example of CoT prompting in the context of product strategy:
"Let's devise the product strategy. Think step by step.
- First, identify the target customer segments.
- Then, examine their key pain points.
- Then, list competitor products and how they are positioned.
- Next, map our product features to these pain points.
- Next, finalize the 2-3 USPs of our product.
- Finally, list down how we will compete against others."
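For anyone comfortable with a few lines of code, the same prompt can be sent through an API so the step list becomes a reusable scaffold. A sketch with the OpenAI Python SDK follows; the model name and the budgeting-app example are assumptions.

```python
# A sketch of Chain-of-Thought via the OpenAI Python SDK: the steps live in a
# list so the scaffold can be reused for other multi-step tasks.
# Model name and product are assumptions; expects OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

steps = [
    "First, identify the target customer segments.",
    "Then, examine their key pain points.",
    "Then, list competitor products and how they are positioned.",
    "Next, map our product features to these pain points.",
    "Next, finalize the 2-3 USPs of our product.",
    "Finally, list down how we will compete against others.",
]

prompt = (
    "Let's devise the product strategy for our personal budgeting app. "
    "Think step by step.\n" + "\n".join(f"- {s}" for s in steps)
)

reply = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```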
When to use it?
CoT helps in complex projects that require multi-step reasoning, such as competitive analysis or figuring out product positioning. It brings transparency into GPT’s output, making it much less of a “black box”.
When it won’t work?
First, CoT is not useful for tasks requiring the use of precise, factual external data. Second, CoT can lead to error propagation if the AI goes deep down an incorrect chain of thought. The workaround would be for you to critique the chain and provide feedback.
ReAct (Reasoning + Acting)
What is it?
With ReAct, GPT alternates between reasoning and taking actions. Since it can also draw on external sources (say, a Wikipedia API or web search), it tends to be more factual and grounded than pure CoT.
An example prompt to create product positioning:
“Study this ReAct approach for researching our competitor's pricing strategy:
Think: What information do we need?
Act: List key data points to gather
Think: How does this compare to industry standards?
Act: Create a comparison table
Think: What are the strategic implications?
Now use a similar approach to develop the product positioning for our new sunscreen. Leverage the attached product note to understand our TG, product features and USP.”
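In a chatbot, ReAct is mostly a style you ask the model to follow. If you want the "Act" steps to hit a real data source, it takes a small loop in code. Here is a rough sketch with the OpenAI Python SDK: the pricing lookup is a stub, and the model name, message format and turn limit are all our own assumptions.

```python
# A rough ReAct-style loop sketched with the OpenAI Python SDK. The "tool" is a
# stub returning canned text; in practice you would plug in a real search or
# pricing source. Expects OPENAI_API_KEY; model and format are assumptions.
from openai import OpenAI

client = OpenAI()

def lookup_pricing(query: str) -> str:
    # Stub tool: replace with a real data source (web search, internal DB, etc.)
    return "Competitor X sells a 50ml SPF50 sunscreen at $18; Competitor Y at $22."

SYSTEM = (
    "Alternate 'Think:' and 'Act:' lines. An Act line must look like "
    "'Act: lookup_pricing(<question>)'. After each Act you will get an "
    "'Observation:' line from me. Finish with a line starting 'Final Answer:'."
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Develop positioning for our new mineral sunscreen, "
                                "grounded in competitors' pricing."},
]

for _ in range(5):  # cap the number of think/act turns
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    if "Final Answer:" in text:
        print(text)
        break
    if "Act:" in text:
        # Feed the stubbed observation back so the model can keep reasoning.
        query = text.split("Act:", 1)[1].splitlines()[0].strip()
        messages.append({"role": "user", "content": f"Observation: {lookup_pricing(query)}"})
```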
When to use it?
ReAct is your go-to approach when you need AI to both think AND fact-check itself, especially by referencing external data sources.
When it won’t work?
As a rule of thumb, if a prompting technique is adding unnecessary complexity, don’t use it. Also remember that ReAct won’t add value for tasks that don’t benefit from being broken down into sub-steps.
Human-in-the-loop prompting
What is it?
For high-stakes tasks that require creative thinking and iteration, you can use “human in the loop”. Here, you take on the role of a reviewer while the AI presents intermediate output to you for validation. This lets you correct errors mid-way and keeps the major decisions in your hands.
For example, the below prompt for designing an onboarding journey incorporates human in the loop at the end:
"You are an expert UI/UX designer who needs to design the onboarding journey for my casual gaming mobile app.
- First, lay down the principles for creating a good journey.
- Next, think of the major stages in the journey.
- Then, expand on each stage.
- Make sure you cover happy paths, unhappy paths, error scenarios and major edge cases.
- At every step, pause and ask me if you're on the right track before moving to the next step.
- For key decision points, present 2-3 options."
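In a chat window, human in the loop simply means answering the model's check-ins before it moves on. If you prefer a script, the same review loop looks roughly like the sketch below (OpenAI Python SDK; the model name and wording are assumptions).

```python
# A minimal human-in-the-loop sketch: the model drafts one step of the
# onboarding journey, you review it in the terminal, and your feedback is sent
# back before it continues. Expects OPENAI_API_KEY; model name is an assumption.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": (
        "You are an expert UI/UX designer creating the onboarding journey for a "
        "casual gaming mobile app. Work one step at a time (principles, stages, "
        "detail per stage, edge cases). After each step, stop and wait for my feedback."
    ),
}]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = reply.choices[0].message.content
    print(draft)
    messages.append({"role": "assistant", "content": draft})

    feedback = input("\nYour feedback ('done' to stop): ").strip()
    if feedback.lower() == "done":
        break
    # Anything else (including an empty reply) is treated as review input.
    messages.append({"role": "user", "content": feedback or "Looks good, continue."})
```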
When to use it?
Human in the loop is an excellent way to leverage human expertise to continuously guide the AI during the process and control the final output quality. We strongly recommend this for all strategic tasks that need deep thinking, such as writing a business case, creating your brand strategy or creating a product roadmap.
When it won't work?
This would be overkill for routine tasks where the cost of errors is low, such as proofreading a document or answering simple questions.
Putting it all together
Finally, the real power comes from combining these techniques to suit your use case. For instance, you might use Chain of Thought with Human-in-the-Loop for a strategy project, or use basic prompting with structured output to generate consistent content for socials.
Creating a good prompt could involve significant writing and will be an iterative process. We foresee that over time, AI companies will build simpler UI and enough abstractions to make this process a lot more user-friendly rather than users having to type long prompts from scratch.
What can GPT realistically do for me?
Let’s look at the most common tasks a mid-level non-tech professional might need to do and explore where AI can add the most value. We have arranged them in a matrix showing which tasks you can already leverage AI for in its current state, and which you can achieve with prompting alone (without writing code, accessing APIs or using a fine-tuned model).
What next?
As you experiment with GPT, you'll develop an intuition for which prompting techniques work best for different use cases. Here are some practical tips to start:
Start with simple prompting techniques. Layer in advanced techniques as you get comfortable
Create your own prompt library. Save and iterate on prompts that consistently deliver results for your top use cases (one simple way to organize this is sketched after this list)
For ongoing projects, leverage Claude's “Projects” feature. Think of it as your AI workspace where you can upload all relevant documents once and reference them across multiple conversations without repeated uploads
As a prerequisite, check your company’s data policies before sharing any confidential or proprietary information with GPT.
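On the prompt library point above: it can be as simple as a document of templates with blanks to fill in. If you like keeping things in code, here is one hypothetical way to organize it; the template names and wording are just illustrations.

```python
# A tiny, hypothetical prompt library: reusable templates with placeholders you
# fill in before pasting into ChatGPT, Claude or Gemini.
PROMPT_LIBRARY = {
    "weekly_update": (
        "You are my chief of staff. Summarize the notes below into a crisp "
        "weekly update for senior leadership: 3 highlights, 3 risks, next steps.\n\n"
        "Notes:\n{notes}"
    ),
    "competitor_teardown": (
        "You are a strategy consultant. Analyze {competitor} for our "
        "{product_category} business: positioning, pricing, strengths, gaps, "
        "and 3 implications for us."
    ),
}

prompt = PROMPT_LIBRARY["competitor_teardown"].format(
    competitor="Acme Naturals", product_category="skincare"
)
print(prompt)
```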
In the next post, we’ll talk about “Are AI applications just GPT wrappers?”. Stay tuned!