DOI: 10.17577/IJERTCONV14IS020023 (Open Access)

- Authors: Priya Aher
- Paper ID: IJERTCONV14IS020023
- Volume & Issue: Volume 14, Issue 02, NCRTCS – 2026
- Published (First Online): 21-04-2026
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Study on Prompt Engineering: Techniques and Performance Enhancement in AI Systems
Priya Aher
Department of Computer Science,
MAEER's MIT Arts, Commerce & Science College, Alandi, Pune, Maharashtra, India
Abstract – The rapid growth of Artificial Intelligence (AI) tools such as ChatGPT has significantly transformed the way users interact with intelligent systems. However, the quality of AI-generated responses depends heavily on how input instructions, known as prompts, are designed. Poorly structured prompts often result in incomplete, unclear, or inaccurate outputs. Prompt Engineering has emerged as a practical and effective approach to improving the performance of AI systems by designing precise, structured, and goal-oriented prompts.
This study examines five prompt engineering techniques: Zero-shot, One-shot, Few-shot, Instruction-based, and Chain-of-Thought prompting. These techniques were evaluated across 20 academic tasks including conceptual questions, mathematical reasoning problems, programming exercises, and analytical writing tasks. AI-generated responses were assessed using four performance parameters: accuracy, logical reasoning, clarity, and structural organization.
The results demonstrate that structured prompting techniques significantly enhance AI output quality. Few-shot and Chain-of-Thought prompting achieved the highest performance, particularly for complex and multi-step reasoning tasks. This research highlights the importance of effective prompt design in optimizing AI systems and provides practical insights for students, researchers, and professionals using AI tools.
Keywords: Prompt Engineering, Artificial Intelligence, Few-shot Learning, Chain-of-Thought, Large Language Models
I. INTRODUCTION
Artificial Intelligence has become an integral component of modern education, industry, and research. Large Language Models (LLMs) such as ChatGPT are increasingly used for content generation, problem-solving, programming assistance, and analytical reasoning. Despite their advanced architecture, the effectiveness of these models is highly dependent on the quality of user input. AI systems operate strictly based on provided instructions. Vague or poorly designed prompts can lead to ambiguous or incorrect responses, even from highly capable models. This limitation has led to the emergence of Prompt Engineering, a discipline focused on designing structured prompts to guide AI systems toward accurate and meaningful outputs. The objective of this research is to analyze and
compare different prompt engineering techniques and evaluate their impact on AI performance across academic tasks.
II. BACKGROUND AND LITERATURE REVIEW
Large Language Models are trained on extensive datasets using deep learning techniques, enabling them to predict and generate human-like text. However, these models do not possess true understanding; instead, they rely on statistical patterns in language.
Brown et al. (2020) demonstrated that providing examples within prompts, known as Few-shot learning, significantly improves model performance. Wei et al. (2022) further introduced Chain-of-Thought prompting, which encourages models to generate intermediate reasoning steps, leading to better performance on complex reasoning tasks.
Existing literature confirms that structured prompting methods enhance contextual understanding and reasoning accuracy. This study extends prior work by experimentally comparing multiple prompting techniques using uniform evaluation criteria.
III. PROMPT ENGINEERING TECHNIQUES
Five prompt engineering techniques were evaluated in this study. Sample prompts were designed carefully to observe how changes in prompt structure impact AI-generated outputs.
3.1 Zero-shot Prompting
In Zero-shot prompting, the model is given a task without any examples.
Prompt Example:
"Explain Artificial Intelligence."
This approach relies entirely on the model's prior knowledge and often results in generic or shallow responses.
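For illustration, a zero-shot prompt can be issued through any chat-based interface to an LLM. The following minimal Python sketch assumes the openai client library and an illustrative model name; neither is prescribed by this study.

# Minimal zero-shot sketch (assumes the `openai` Python package and a
# configured API key; the model name is illustrative).
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        # Zero-shot: the task is stated directly, with no examples.
        {"role": "user", "content": "Explain Artificial Intelligence."}
    ],
)
print(response.choices[0].message.content)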
3.2 One-shot Prompting
In One-shot prompting, a single example is provided before the actual task.
Prompt Example:
"Example: Artificial Intelligence is the simulation of human intelligence in machines. Now explain Machine Learning."
Providing one example slightly improves context understanding and response relevance.
3.3 Few-shot Prompting
Few-shot prompting includes multiple examples to guide the model toward the desired output structure.
Prompt Example:
"Example 1: Artificial Intelligence refers to machines that mimic human intelligence.
Example 2: Machine Learning is a subset of AI that learns from data. Now explain Deep Learning."
This technique significantly improves structure, consistency, and accuracy.
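Since One-shot prompting is simply the single-example case of Few-shot prompting, both prompt styles can be assembled with one small helper. The sketch below is illustrative; the function name is hypothetical and not part of the study.

# Hypothetical helper: prepends worked examples to the task
# (one example = One-shot, several examples = Few-shot).
def build_example_prompt(examples, task):
    numbered = [f"Example {i + 1}: {ex}" for i, ex in enumerate(examples)]
    return "\n".join(numbered + [f"Now {task}"])

prompt = build_example_prompt(
    examples=[
        "Artificial Intelligence refers to machines that mimic human intelligence.",
        "Machine Learning is a subset of AI that learns from data.",
    ],
    task="explain Deep Learning.",
)
print(prompt)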
3.4 Instruction-based Prompting
Instruction-based prompting clearly defines task requirements such as length, format, and style.
Prompt Example:
"Explain Artificial Intelligence in 150 words using simple language and one real-world example."
Explicit instructions reduce ambiguity and improve clarity and organization.
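The same idea extends to instruction-based prompts, where constraints are stated explicitly. A minimal sketch with hypothetical parameter names:

# Hypothetical template for instruction-based prompts: the length,
# style, and extra requirements are embedded as explicit constraints.
def build_instruction_prompt(task, word_limit, style, extra):
    return f"{task} in {word_limit} words using {style} and {extra}."

print(build_instruction_prompt(
    task="Explain Artificial Intelligence",
    word_limit=150,
    style="simple language",
    extra="one real-world example",
))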
3.5 Chain-of-Thought Prompting
Chain-of-Thought prompting instructs the model to reason step-by-step before giving the final answer.
Prompt Example:
"Explain step-by-step what Artificial Intelligence is, list its components, and then provide a final definition."
This technique enhances logical reasoning and is particularly effective for complex and multi-step problems.
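As with the zero-shot case, a Chain-of-Thought request differs only in the prompt text, which explicitly asks for intermediate reasoning before the final answer. A minimal sketch, again assuming the openai client and an illustrative model name:

# Chain-of-Thought sketch: the prompt itself requests step-by-step
# reasoning before a final definition (same assumptions as the
# zero-shot sketch above).
from openai import OpenAI

client = OpenAI()
cot_prompt = (
    "Explain step-by-step what Artificial Intelligence is, "
    "list its components, and then provide a final definition."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)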
IV. METHODOLOGY
A dataset of 20 academic tasks was designed for evaluation:
- 5 conceptual theory questions
- 5 mathematical reasoning problems
- 5 programming-related tasks
- 5 analytical writing tasks
Each task was executed using all five prompting techniques, resulting in 100 AI-generated responses.
Responses were evaluated based on four parameters:
- Accuracy
- Logical Reasoning
- Clarity
- Structural Organization
Each parameter was scored on a scale of 0 to 10.
Final Score Formula:
Final Score = (Accuracy + Reasoning + Clarity + Structure) / 4
Uniform evaluation standards were applied to ensure fairness and consistency.
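The scoring rule is simple to implement. A minimal sketch of the per-response computation, with hypothetical ratings for a single response:

# Per-response scoring on the paper's 0-10 scale.
from dataclasses import dataclass

@dataclass
class ResponseScores:
    accuracy: float
    reasoning: float
    clarity: float
    structure: float

    def final_score(self):
        # Final Score = (Accuracy + Reasoning + Clarity + Structure) / 4
        return (self.accuracy + self.reasoning
                + self.clarity + self.structure) / 4

# Hypothetical ratings for one AI-generated response:
print(ResponseScores(accuracy=9, reasoning=8,
                     clarity=9, structure=8).final_score())  # 8.5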
V. EXPERIMENTAL RESULTS
5.1 Performance Scores

Prompting Technique        Final Score (Out of 10)
Zero-shot                  5.5
One-shot                   6.5
Few-shot                   8.1
Instruction-based          7.7
Chain-of-Thought           8.8

5.2 Graphical Representation
[Fig. 1: Performance comparison of prompt engineering techniques]
Fig. 1 shows a simple bar chart representing the average final performance scores obtained by the different prompt engineering techniques. The X-axis represents the prompting techniques, while the Y-axis represents the final performance score (out of 10). Each bar corresponds to the average score achieved across all academic tasks.
The graph clearly indicates that Zero-shot prompting produces the lowest performance due to the absence of guidance. One-shot prompting shows a moderate improvement by providing a single example. Few-shot prompting achieves higher performance, as multiple examples help the model understand patterns and expectations. Instruction-based prompting further improves clarity by explicitly defining constraints. Chain-of-Thought prompting achieves the highest score, as it encourages step-by-step reasoning, resulting in more accurate and logically structured responses.
The graphical representation makes it evident that increased prompt structure leads to improved AI performance, validating the core objective of this research.
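A chart equivalent to Fig. 1 can be regenerated from the Section 5.1 scores in a few lines of matplotlib; the sketch below is illustrative and is not the study's original plotting code.

# Bar chart of the technique-level final scores from Section 5.1.
import matplotlib.pyplot as plt

techniques = ["Zero-shot", "One-shot", "Few-shot",
              "Instruction-based", "Chain-of-Thought"]
scores = [5.5, 6.5, 8.1, 7.7, 8.8]

plt.bar(techniques, scores)
plt.xlabel("Prompting technique")
plt.ylabel("Final performance score (out of 10)")
plt.title("Performance comparison of prompt engineering techniques")
plt.xticks(rotation=20)
plt.tight_layout()
plt.show()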
5.3 Statistical Analysis
A statistical analysis was performed across all five prompting techniques:
- Mean Score = 7.32
- Standard Deviation = 1.25
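As a quick check, the reported mean follows directly from the five technique-level scores. The reported standard deviation was presumably computed over the 100 individual responses, which are not published; the two technique-level estimates below merely bracket it.

# Sanity check of the reported statistics from the technique-level scores.
import statistics

scores = [5.5, 6.5, 8.1, 7.7, 8.8]
print(round(statistics.mean(scores), 2))    # 7.32 (matches the reported mean)
print(round(statistics.pstdev(scores), 2))  # 1.18 (population SD of these five)
print(round(statistics.stdev(scores), 2))   # 1.32 (sample SD of these five)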
VI. DISCUSSION
The experimental results clearly indicate that structured prompt engineering techniques significantly enhance AI performance. Zero-shot prompting produced basic responses with limited depth. One-shot prompting offered moderate improvements by providing minimal guidance.
Few-shot prompting improved consistency and structural organization by demonstrating expected output patterns. Instruction-based prompting enhanced clarity and formatting. Chain-of-Thought prompting achieved the highest scores by improving reasoning transparency and logical flow.
The study observed approximately a 60% improvement in performance from Zero-shot to Chain-of-Thought prompting, emphasizing the critical role of prompt design.
VII. APPLICATIONS
Prompt engineering techniques can be effectively applied in:
- Education (concept explanation, assignment assistance)
- Software development (code generation and debugging)
- Research and academic writing
- Data analysis and reporting
- Business communication

Effective prompting improves precision, reduces ambiguity, and enhances productivity.
VIII. LIMITATIONS
This study was conducted using a limited dataset of 20 tasks. Larger datasets and automated evaluation metrics could provide stronger statistical validation. Additionally, AI performance may vary across different models and system versions.

IX. CONCLUSION
This research demonstrates that prompt engineering is a critical factor in optimizing AI system performance. Structured prompting techniques, particularly Few-shot and Chain-of-Thought prompting, significantly improve accuracy, reasoning depth, clarity, and consistency.
The findings confirm that AI effectiveness depends not only on model sophistication but also on the clarity and structure of human instructions. As AI adoption continues to grow, prompt engineering will become an essential skill for effective human-AI interaction.
Future research may explore adaptive prompting systems and automated prompt optimization techniques.
X. REFERENCES
- Brown, T. et al., Language Models are Few-Shot Learners, NeurIPS, 2020.
- Wei, J. et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS, 2022.
- OpenAI, Prompt Engineering Guide, OpenAI Documentation.
