
Techmaze: Interactive Coding Language Platform

DOI : 10.17577/IJERTCONV14IS010092

Mrs. Jayashree J
Assistant Professor, Department of MCA

AJ Institute of Engineering and Technology, Mangalore, India

Ms. Deeksha S Shetty

Department of MCA

AJ Institute of Engineering and Technology, Mangalore, India

Mr. Suhas S B

Department of MCA

AJ Institute of Engineering and Technology, Mangalore, India

Mr. Abhiram

Department of MCA

AJ Institute of Engineering and Technology, Mangalore, India

Mr. Darshan Prabhakar

Department of MCA

AJ Institute of Engineering and Technology, Mangalore, India

Abstract – Programming has become an essential part of modern education and is in high demand across all industries. To address this need, TECHMAZE, a web-based interactive programming platform, has been created to connect theoretical learning with practical coding skills. It is built for both beginners and experienced learners, offering a hands-on learning environment supported by modern web technologies. TECHMAZE allows users to learn programming through an intuitive real-time code editor, structured learning paths, and integrated resources. The platform currently supports the Python, C, and C++ programming languages, making it suitable for various learning requirements. The development stack includes React, TypeScript, Supabase, and Tailwind CSS, ensuring a responsive and user-friendly interface. Key features such as progress tracking, browser-based access, and multi-language support make TECHMAZE easily accessible and engaging. These tools promote consistent learning and allow users to track their development over time. The platform is especially beneficial for students, self-learners, and educational institutions that want to provide effective coding education. This paper discusses the design and implementation of TECHMAZE, highlighting how its features improve the learning experience. By offering a smooth combination of theoretical content and practical application, TECHMAZE helps reduce the skill gap in today's digital and software-driven economy.

Keywords: Interactive programming learning, web-based platform, coding education, React, Supabase.

  1. INTRODUCTION

    With the rapid pace of the digital age, programming skill is more relevant than ever before, not only for computer professionals but also for those in other fields. Nevertheless, the traditional approach to learning to program, via textbooks and solitary coding practice, is generally inadequate for meeting dynamic programming learning needs. Aware of this disparity, our group created TECHMAZE, an interactive web-based programming learning platform with the vision of revolutionizing coding practice and learning.

    TECHMAZE is designed to provide an end-to-end, practical learning experience for students, self-learners, and schools. It unites theory and practice by allowing the user to code, test, and debug in real time within the browser. TECHMAZE offers a comprehensive set of features including user login, progress tracking, learning maps for Python, C, and C++, curated educational content, and a built-in code editor. All these features combine to provide an interactive, efficient, and personalized learning experience.

    The technology stack is built with modern web development frameworks: React (frontend), Supabase (backend APIs and authentication), TypeScript (type safety), Tailwind CSS (responsiveness), and PostgreSQL (main database). This solid technology stack ensures that TECHMAZE provides a scalable, secure, and seamless user experience across all devices.

    Unlike conventional learning sites that separate theory from practice, TECHMAZE is a firm believer in learning by doing. It enables users to follow guided programming steps, from beginner to master levels, with direct application of ideas through interactive quizzes and tests. The live editor, combined with quizzes and the progress graph, provides the continuous feedback loop necessary for skill development. Further, TECHMAZE addresses common learning-to-code problems, including software installation, motivation, and uneven progress, through browser-based access, gamification, and user-centric design. With its future-proof architecture, the platform can be extended with additional languages, collaborative features, AI-based coding assistance, and even mobile apps.

    Fundamentally, TECHMAZE is not just a tutorial website or a code editor; it is an educational platform that inspires curiosity, builds confidence, and empowers users to become skilled programmers. Whether used in schools or as a personal resource, TECHMAZE provides a practical, reliable, and enjoyable means of learning programming.

  2. FORMATIVE STUDY

    Our formative study aims to identify the challenges students encounter while learning programming on online platforms like TECHMAZE. This is particularly important during the shift from theory to practical coding skills. Unlike standard online courses, TECHMAZE emphasizes interactive, code-focused learning that adjusts to each student's progress. This approach fosters a fast-paced, skills-driven, and feedback-oriented learning environment. It demands a system that responds to both learner needs and the complexities of programming.

      1. PROCEDURE

        We recruited ten undergraduate and postgraduate students (6 males and 4 females), aged 20 to 26 (M = 22.8, SD = 1.9). All participants had a computing background with varying levels of exposure to Python, C, and C++. Each student had basic programming knowledge, and most had experience with other platforms like HackerRank or LeetCode. Participants were chosen from a local university's computer science department. They took part in a structured, hands-on session with the TECHMAZE prototype. The study took place in a controlled setting, either a computer lab or a remote Zoom setup. Each session lasted around 30 to 40 minutes. Students were introduced to the platform and then asked to complete core learning tasks, which included:

        • Navigating through the Explore, Enhancement, and Challenge Hub sections.

        • Choosing a programming language and finishing at least one coding challenge.

        • Using the real-time code editor to write, run, and debug code.

        • Observing feedback mechanisms like progress tracking, points earned, and task completion status.

          All user actions were screen-recorded with consent, and we conducted post-task interviews to gather qualitative feedback. To ensure consistency, all participants followed the same sequence of tasks. Observers noted instances where students hesitated, asked questions, or struggled to complete a step smoothly.

      2. FINDINGS

        The study revealed several insights about how students interacted with the platform. These insights are organized into objectives, process, and content.

      3. OBJECTIVES

        Students valued platforms that provided learning tasks linked to specific objectives such as mastering syntax, developing logic, and implementing data structures. They preferred platforms that allowed them to filter challenges by skill level, which TECHMAZE supported.

        However, students also wanted clear learning paths focused on goals. For instance, one participant stated, "I wanted a path that tells me what to learn next after this challenge." Although TECHMAZE showed topics like variables, loops, and functions, students felt the system could better highlight the concepts each challenge addressed.

        While students liked the option to switch between languages (Python, C, C++, Go), they expected the system to keep track of their progress and goals across each language, which was not entirely in place.

      4. PROCESS

        Programming platforms need to do more than just execute code; they also need to provide support when students get stuck, such as hints or breakdowns of problems. Several users expressed uncertainty when encountering errors, especially syntax or runtime errors, unsure if these were logical mistakes or system issues.

        Students preferred systems that offered inline feedback guiding them to the next steps instead of vague error messages. For example, when a C++ solution failed due to a missing header file, one participant gave up, saying, "I don't know what it's asking me to fix."

        Moreover, students appreciated having step-by-step breakdowns of challenges, but only a few tasks on TECHMAZE currently provided this. They suggested adding explanations before and after coding tasks, as this would help them build a mental model of how the logic should develop.

      5. CONTENT

        The content on TECHMAZE was well-received, particularly due to its range of programming challenges and sorting by difficulty level. Most students completed Easy and Medium challenges within the given time frame. Students showed interest in:

        • Interactive hints for more challenging problems.

        • Sample outputs or example test cases, especially in Go and C challenges.

        • Explanations after submission, detailing what was done correctly or incorrectly.

        A key observation was that learners sought contextual content: they wanted to understand why they were solving a problem and how it related to real-world logic. One participant mentioned, "I wish there were more scenarios or problem descriptions that feel real."

        Additionally, although TECHMAZE currently offers textual feedback and challenge descriptions, students suggested future additions of video explanations, graphical visuals, and peer discussion forums.

      6. DESIGN

    Based on the feedback from this study, we established three main design goals for TECHMAZE to better support programming learners:

    DG1: Make learning pathways clear with specific objectives. The platform should explicitly show which concept a user is tackling in each challenge. For instance, a problem labeled "loops" should specify the type of loop, its purpose, and how it contributes to mastering control structures.

    DG2: Offer guided feedback and support for debugging. Real-time hints, syntax error detection, and logic walkthroughs should be accessible to reduce dropout rates. The system should assist students in overcoming confusion by providing reasoning for solutions, not just comparing outputs.
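DG2's guided-feedback idea can be sketched as a small classifier that maps raw interpreter errors to next-step hints instead of leaving learners with a bare traceback. This is an illustrative sketch only, not TECHMAZE's actual implementation; the error categories, hint texts, and function names are assumptions.

```python
def classify_error(stderr: str) -> str:
    """Roughly distinguish syntax, runtime, and unknown error classes
    from an interpreter's stderr text (a deliberately naive heuristic)."""
    if "SyntaxError" in stderr or "IndentationError" in stderr:
        return "syntax"
    if "Error" in stderr or "Exception" in stderr:
        return "runtime"
    return "unknown"

# Hypothetical hint table: each class gets a guiding next step
# rather than a vague failure message.
HINTS = {
    "syntax": "Check punctuation and indentation near the reported line.",
    "runtime": "The code parsed, so re-examine your logic and input values.",
    "unknown": "Compare your output with the expected output step by step.",
}

def guided_hint(stderr: str) -> str:
    return HINTS[classify_error(stderr)]
```

A real system would key off structured compiler diagnostics rather than substring matching, but the mapping from error class to actionable hint is the essential idea.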

    DG3: Provide contextual and personalized content delivery. Students want relevant, practical challenges. The system should facilitate the creation of learning paths or modules based on real-life scenarios. Future versions could also integrate voice explanations, test case generators, or even coding assistants powered by AI.

  3. TECHMAZE

    TECHMAZE is an interactive online platform that makes learning programming easier and more engaging for students. As coding becomes a vital skill today, many learners find it hard to connect theory with practical coding. TECHMAZE addresses this issue by offering a hands-on, step-by-step approach to learning languages like Python, C, and C++. This platform suits school and college students, beginners, and even teachers who want to create an engaging classroom experience. The platform features a clean and user-friendly interface that lets users write, run, and test code in real time. Students do not need to install anything on their computers; everything happens online.

    TECHMAZE also provides well-designed learning paths that guide students from basic to advanced coding topics. These paths help students build strong foundations before tackling more complex concepts.

    One unique feature of TECHMAZE is its AI-powered guidance system. This system offers suggestions, hints, and help with debugging code. So, if a student gets stuck, the platform can provide support similar to what a teacher would. This makes learning easier and more motivating. Alongside learning paths and code practice, TECHMAZE offers creative coding challenges and projects. These challenges let students use what they've learned in fun and engaging ways. For example, students can create games, animations, or mini applications, helping them become not only coders but also creative problem solvers. TECHMAZE believes in learning by doing, which is why hands-on activities are key to the platform. The platform also supports classroom integration. Teachers can create class groups, assign tasks, and track students' progress. This helps educators manage programming courses and give personalized feedback. TECHMAZE bridges the gap between teachers and students by offering tools that support both sides.

    TECHMAZE is built with students in mind. It works on multiple devices and is accessible at any time, from anywhere. This flexibility allows students to learn at their own pace and revisit lessons whenever needed. Gamified elements, such as badges and scoreboards, add excitement and motivation, encouraging students to keep practicing and improving.

    In summary, TECHMAZE is more than just a coding tool. It offers a complete learning solution for programming education. It combines real-time code execution, AI support, structured lessons, fun challenges, and teacher toolsall in one platform. By using TECHMAZE, students not only learn to code but also gain confidence, creativity, and strong computational thinking skills that are essential for the future.

      1. TECHMAZE'S USER INTERFACE

        The user interface of TECHMAZE is crafted to deliver a smooth and enjoyable coding experience for learners of all levels. The layout is clear and organized, so users can easily locate the tools and information they need. At the top, a navigation bar holds quick links to various programming languages such as Python, C, and C++, along with buttons for Login and Register. These options let users switch between languages or access their accounts with ease.

        In the centre of the screen, TECHMAZE includes a real- time code editor where users can type and edit their code. The editor supports syntax highlighting, automatic indentation, and error suggestions to help users write accurate and readable code. Below or beside the editor is the output terminal, which displays the program's result when the user clicks the Run button. This setup allows learners to test their code immediately and see how it works in real time.
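Behind an editor's Run button, a server-side execute-and-capture step can be sketched as below. This is a simplified Python sketch under assumptions: the function name `run_snippet` is hypothetical, and a production platform would add sandboxing and resource limits before running untrusted code.

```python
import subprocess
import sys

def run_snippet(source: str, timeout: float = 5.0) -> tuple[str, str]:
    """Run a Python snippet in a fresh interpreter process and return
    (stdout, stderr), the pair a web editor would show in its output
    terminal. No sandboxing here: illustration only."""
    proc = subprocess.run(
        [sys.executable, "-c", source],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout, proc.stderr

out, err = run_snippet("print(2 + 3)")
# out == "5\n", err == ""
```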

        To assist beginners, the interface also features learning modules and coding challenges that show up as cards or tabs on the side. These modules provide brief explanations, examples, and tasks that users can complete step by step. An AI assistant panel is available to offer coding suggestions, logic support, and hints when users encounter difficulties.

        The colour scheme and icons are soft and visually appealing, promoting focus and minimizing eye strain. Whether using a desktop or a mobile device, TECHMAZE adjusts to different screen sizes, providing a smooth and responsive experience. In summary, the TECHMAZE interface blends simplicity, functionality, and interactivity to help users learn programming effectively.

        The TECHMAZE Home Page is the entry point of the platform. It shows the platform name and its tagline, "Code Learn Master," to highlight the site's purpose. Users can access major areas like "Explore" to view learning materials, "Enhancement" to solve practice problems, "Login" and "Register" to access or create an account, or "Admin" if they are an administrator. The layout is straightforward to navigate, so users can easily get their bearings and find what they are looking for. It is minimal and concise, welcoming new and returning users and directing them towards their learning destination.

        1. The Explore Page is an exploration showcase where students can browse the programming problems available in different languages such as Python, C, C++, and Go. Problems are searchable and categorized by difficulty level (Easy, Medium, and Hard), and filters are provided so students can choose the right level and language. The page is like a learning catalogue, giving students an immediate glimpse of each task, e.g., title, estimated time, point value, percentage complete, and tags such as "#loops" or "#variables." The idea is to make finding new problems easy and enjoyable while challenging students to move out of their comfort zone.
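The Explore page's language/difficulty/tag filtering can be sketched as a simple predicate over a challenge catalogue. The sample data and function name below are invented for illustration; only the filter dimensions (language, difficulty, tag) come from the paper.

```python
# Hypothetical challenge catalogue; the entries are made up.
CHALLENGES = [
    {"title": "FizzBuzz", "language": "Python", "difficulty": "Easy",
     "tags": ["#loops", "#conditionals"]},
    {"title": "Linked List", "language": "C", "difficulty": "Hard",
     "tags": ["#pointers"]},
    {"title": "Word Count", "language": "Go", "difficulty": "Medium",
     "tags": ["#maps", "#loops"]},
]

def filter_challenges(language=None, difficulty=None, tag=None):
    """Return challenges matching every filter that is set;
    a filter left as None matches everything."""
    result = []
    for c in CHALLENGES:
        if language and c["language"] != language:
            continue
        if difficulty and c["difficulty"] != difficulty:
            continue
        if tag and tag not in c["tags"]:
            continue
        result.append(c)
    return result
```

For example, `filter_challenges(tag="#loops")` would return the FizzBuzz and Word Count entries from the sample catalogue above.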

        2. The Enhancement Page is the core practice space. It has a scrollable, extended list of all the challenges in a selected language. Each challenge is represented by a compact card with the task name, difficulty level, description, estimated time, and point value. If a user clicks "Start Challenge," they are taken to a code window (the Challenge Page) where they can enter and execute their code. Enhancement is designed to be a learning sandbox: each challenge builds programming knowledge a little at a time and allows learners to build up through practice and experience. It also shows progress statistics like total questions, challenges solved, points earned, and total available languages, giving a sense of purpose and accomplishment.

          Fig. 1: User interface of (A) the Explore module. Users start at A1 by choosing a programming language. They can also search for one using A2 or pick from a list in A3 (Python, C, C++, Go). After making a selection, they move to A4, a simple code editor where they can write, run, reset, or save code, view the compilation time, and see success or error messages. Once they finish coding, the Resource section offers learning support such as roadmaps, YouTube videos, and notes. (B) The Enhancement module helps users practice coding through challenges. It begins at B1, where users search for challenges. At B2, they select the difficulty level (easy, medium, hard), and at B3, they choose a language. Then in B4, they solve the chosen challenge with access to a code editor, output viewer, and optional hints. (C) The Admin module at C1 allows administrators to manage content. Through C2, they can add learning resources,

          Login and Register Pages are part of the authentication flow. The login page asks registered users to provide their email and password to access their personalized dashboard, progress, and saved challenges. The register page allows new users to sign up by providing fundamental details like name, email, password, and role (e.g., learner or admin). These pages provide data security and allow TECHMAZE to personalize the user experience. After login, users can access features like progress tracking and real-time code submission, which are not available otherwise.

        3. Admin Page is a secure area reserved for admin users. Admins are able to add new problems, modify existing ones, define languages, modify descriptions, or remove unused problems on this page. Admins can also view registered users and moderate their activity if necessary. The admin interface is designed for control and content management to keep the platform active, fresh, and relevant. Admins differ from regular users because they have extra tools and dashboards to ensure the quality and variety of learning material across TECHMAZE.

        The Challenge Page is where the coding takes place. As soon as a student clicks "Start Challenge" from the Explore or Enhancement page, they are taken to this page. It has a problem statement on one side and a live code editor on the other. Users can choose their language, enter their solution, and run it with input/output fields.

        The interface provides immediate feedback on correctness, giving a live-coding experience. This page is the heart of the fusion of theoretical knowledge and coding practice, with skills strengthened through direct exposure to code.

        The Profile Page is a tailored dashboard showing the learner's avatar, email address, number of challenges attempted, number of challenges completed, points earned, and languages learned. It may also include a "Sign Out" link and a panel that shows recommended challenges based on past performance. The page allows learners to monitor progress over time and to continue discovering and mastering more topics.

      2. TECHMAZE'S BACKEND MODEL

            1. Stage-Based Workflow of Learning and Prompt-Controlled Conversation

              TECHMAZE's backend is a deliberate, three-phase learning pipeline (Language Choice and Problem Definition, Code Writing with AI Assistance, and Output Analysis and Debugging), each phase powered by a Large Language Model (LLM) and specially optimized prompts. On logging into the platform, a student first selects a programming language and then picks a code problem from the language-specific problem bank. System-level prompts ask the LLM to produce language-specific explanations, hints, and scaffolding instructions. Instead of employing a single long prompt to drive the model, TECHMAZE uses individual, optimized prompts for the reasoning of each learning phase, to avoid cognitive overload and enhance precision.

              For example, in the problem understanding phase, the model produces basic decompositions of the problem statement, resolves user confusion through natural-language questions, and stimulates algorithmic reasoning. In the coding phase, the LLM offers code snippets, structural suggestions, and syntax-specific tips step by step. This avoids providing full solutions while keeping the learner engaged. Lastly, in the debugging and output analysis phase, the model scans the user's output or error trace and gives detailed feedback or corrections based on the runtime behavior, keeping in mind the original learning goals embedded by the instructor or system administrator.
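The phase-specific prompting described above can be sketched as a small template table keyed by learning phase. The three phases come from the paper, but the template wording and the helper name `build_system_prompt` are assumptions for illustration.

```python
# Hypothetical per-phase system prompts; real templates would be
# longer and tuned, but the one-template-per-phase structure is the point.
PHASE_PROMPTS = {
    "understanding": ("Decompose the problem into plain-language steps. "
                      "Ask clarifying questions; do not write code."),
    "coding": ("Suggest the next structural step or snippet only. "
               "Never reveal the full solution."),
    "debugging": ("Explain the error trace and point to the faulty line, "
                  "keeping the original learning goal in mind."),
}

def build_system_prompt(phase: str, language: str) -> str:
    """Select the template for the learner's current phase, scoped to
    the chosen programming language."""
    if phase not in PHASE_PROMPTS:
        raise ValueError(f"unknown phase: {phase}")
    return f"[{language}] {PHASE_PROMPTS[phase]}"
```

Keeping each phase's instructions in its own short template, rather than one long omnibus prompt, is exactly the design choice the paragraph above motivates.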

            2. Problem Metadata Highlighting and Adaptive Code Feedback

              To customize every student's experience for their level of learning, TECHMAZE's backend dynamically annotates problem structure with metadata: difficulty level, topic tags, anticipated logic, and main concepts of interest. Not only are these metadata exposed in the UI, but an auxiliary LLM running in the background uses them to produce adaptive feedback. When a student writes code or gets stuck, the system marks the associated regions of the problem or the anticipated logic flow with color-coded visual indicators.

              The LLM cross-checks the user-written code against this metadata and highlights sections of code that match the intended logic exactly, partially, or not at all. This real-time semantic analysis leads the student directly to the exact section of code that requires fixing or optimization. Such block-level highlighting is especially beneficial for beginners, as it avoids overwhelming them with ambiguous debugging messages.
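The exact/partial/no-match labelling can be illustrated with a deliberately naive keyword-level matcher. The real platform uses an LLM for semantic analysis; this sketch only demonstrates the three-way distinction, and the function name and expected-keyword sets are invented.

```python
def match_level(line: str, expected_keywords: set[str]) -> str:
    """Label one line of user code against the expected concepts:
    'exact' if all keywords appear, 'partial' if some do, 'none' otherwise."""
    found = {kw for kw in expected_keywords if kw in line}
    if found == expected_keywords:
        return "exact"
    if found:
        return "partial"
    return "none"

# Illustrative run: expected metadata says the solution should use
# a for-loop over a range.
code = ["for i in range(10):", "    total += i", "print(total)"]
expected = {"for", "range"}
labels = [match_level(line, expected) for line in code]
# labels == ["exact", "none", "none"]
```

The UI would then color each line according to its label, which is the block-level highlighting described above.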

            3. Multimodal Support for Code Visuals and Explanations

              To fill the gap between textual code and visual comprehension, TECHMAZE integrates multimodal generation features through built-in APIs. For example, if a student types a loop or conditional statement, the platform can render a control-flow diagram or flowchart illustrating the same logic, using image-generation models such as Stable Diffusion with educational-template overlays. Likewise, students can ask for voice-over explanations of specific code snippets via a text-to-speech module that translates LLM-provided explanations into easy-to-hear, well-paced audio snippets.

        These multimodal resources (code-to-flowchart visualizations, input/output table visualizations, and audio narrations) are generated and cached for reuse within the platform to increase accessibility and reduce cognitive overload. Notably, filters are applied on both the input and output sides to ensure the generated content is purely educational and free of any inappropriate or irrelevant content.

            4. Progressive Code Guidance and Fine-Tuned Model Incorporation

        To facilitate deeper implementation-level learning, TECHMAZE employs a progressive logic-to-code aid pipeline powered by a fine-tuned version of GPT-3.5-Turbo. Instead of producing code directly, the model first breaks the problem down into natural-language logic pieces (e.g., "Check if input is a prime number") and then expresses them as structured pseudo-code. This step-by-step generation replicates the "chain-of-thought" prompting strategy, which we have found to greatly improve LLM reasoning on programming questions. We extracted 100+ introductory code samples from credible learning materials and Scratch-like programming environments, converted them into pseudo-code and then into abstract syntax trees (ASTs). The ASTs serve as intermediate forms for the LLM to generate structured, modular code with simpler debugging and visualization. In testing, BLEU and F1-score were used to measure the quality of code generated by the base model and the fine-tuned model, showing significant improvement in structure alignment and fewer hallucinations in logic generation. The platform also leverages edit-distance calculations between pseudo-code and the resulting language-specific syntax to enable improved code visualization and auto-correction when students enter keywords or logic incorrectly.
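The edit-distance calculation mentioned above is not specified in the paper; a standard choice would be Levenshtein distance, sketched here with dynamic programming. This is one plausible implementation, not necessarily what TECHMAZE uses.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b: the minimum number
    of single-character insertions, deletions, and substitutions needed
    to turn a into b. Computed row by row to keep memory at O(len(b))."""
    prev = list(range(len(b) + 1))  # distance from a[:0] to every prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[len(b)]
```

For auto-correction, a mistyped keyword like "pirnt" sits at distance 2 from "print", so a small threshold on this distance can flag likely typos.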

      3. IMPLEMENTATION

    The TECHMAZE platform has been constructed with a mix of modern front-end and back-end technologies to provide scalability, interactivity, and smooth AI integration for a rich coding-learning experience. The user interface is built with React.js, giving the platform a dynamic, component-based, and responsive structure across devices. For managing consistent state changes across code input, output visualization, and AI feedback, we used Redux Toolkit for effective state management. UI elements such as code editors, language buttons, and AI-assist cards are styled with Tailwind CSS, giving a clean, responsive, and minimalist design language appropriate for learning environments. Server-side, the system's core is built on Node.js with Express for its non-blocking I/O and concurrent scalability in handling user requests, especially when several students are coding and receiving instant AI feedback. User profiles, problem-solving history, submitted code, and AI feedback are stored in a MongoDB database. For real-time interaction between students and the AI, we integrated Socket.IO, which enables live code feedback, collaborative work sessions, and concurrent problem-solving. User authentication and secure login are managed by JWT-based token authentication, providing session security and scalability in a multi-user learning environment.

    For AI processing, OpenAI's GPT-4 (0613 model) is employed through the ChatCompletion API to support structured conversation, logic and code generation, coding hints, and uncertainty resolution based on student input. The LLM is also used to transform learner queries into structured input prompts for image and audio generation tasks. Controlled prompt templates trigger the AI with variations based on the learning stage (problem comprehension, coding, or debugging) to provide consistent learning paths and prevent AI drift. The prompts are dynamically injected into the API payload based on real-time user activity, language context, and difficulty level.
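Assembling such a stage-dependent payload can be sketched as below. The field names (`model`, `messages`, `role`, `content`) follow OpenAI's chat API; the template texts, the helper name `build_payload`, and the choice of stage labels are assumptions, and no network call is made here.

```python
def build_payload(stage: str, language: str, user_message: str,
                  model: str = "gpt-4-0613") -> dict:
    """Build a ChatCompletion-style request body whose system message
    varies with the learning stage. Illustrative sketch only."""
    stage_templates = {
        "comprehension": "Help the learner restate the problem.",
        "coding": "Offer incremental hints, never full solutions.",
        "debugging": "Interpret the error and suggest a fix direction.",
    }
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Language: {language}. {stage_templates[stage]}"},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_payload("debugging", "C++", "Why does my loop never end?")
```

The backend would send this dictionary as the JSON body of the ChatCompletion request, swapping the system template as the learner moves between stages.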

    To facilitate easier access, we added text-to-speech functionality through Google Text-to-Speech API, reading out descriptions and supporting hearing learners. At the same time, speech-to-text functionality is powered by OpenAI's Whisper model, allowing students to pose questions or describe problems verbally. The multimodal support provides an accessible learning setup. However, recognizing that classroom environments might have overlapping voices, we also added an optional text interface, allowing learners to interact with the AI assistant quietly without voice inputs.

    To protect students from objectionable or offensive content, a robust moderation layer based on OpenAI's in-house content filter is employed, screening all outgoing and incoming messages for hate, violence, explicit content, or sensitive subject matter. Moderation logic runs at both the input (the students' questions) and output (the AI's responses) levels to ensure a purely academic environment at all times. The software also offers localized caching of frequently used prompts and answers, which reduces API latency and operational costs significantly. Overall, the system design couples AI, database performance, UI responsiveness, and real-time collaboration to ensure that TECHMAZE is a secure, smart, and interactive learning platform for aspiring coders.
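The localized prompt/answer caching mentioned above amounts to a memoization layer in front of the model call. A minimal sketch, assuming an in-memory dictionary cache; the class name `PromptCache` and the stand-in `fake_llm` backend are invented for illustration.

```python
class PromptCache:
    """Serve repeated prompts from memory instead of re-calling the API,
    cutting latency and per-call cost for frequently asked questions."""

    def __init__(self, backend):
        self.backend = backend  # callable: prompt -> answer
        self.store = {}
        self.hits = 0

    def ask(self, prompt: str) -> str:
        if prompt in self.store:
            self.hits += 1
            return self.store[prompt]
        answer = self.backend(prompt)
        self.store[prompt] = answer
        return answer

def fake_llm(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"answer to: {prompt}"

cache = PromptCache(fake_llm)
cache.ask("What is a loop?")
cache.ask("What is a loop?")  # second call is a cache hit
# cache.hits == 1
```

A production version would add expiry and a persistent store, but even this shape avoids one API round trip per repeated question.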

    4. EXPERIMENT

    To investigate how TECHMAZE affects students' programming learning experience and results, we conducted a within-subject study. We used traditional coding practice, meaning manual problem-solving without smart assistance, as a baseline. A total of 24 undergraduate students from the computer science department took part in this experiment.

    Each participant used both systems. First, they engaged in traditional practice and then in TECHMAZE, across two different sessions spaced one week apart. Each session involved solving programming tasks related to specific languages (Python, C, or C++) using a guided interface.

    The goal was to evaluate how TECHMAZE, as an AI-assisted learning tool, improves the learning process, coding speed, and understanding of basic concepts. Specifically, this experiment aimed to answer the following research questions:

    RQ1: How does TECHMAZE help students reach their programming learning goals by offering structured guidance and smart feedback?

    RQ2: How does TECHMAZE affect the quality of code and aid in the development of computational thinking skills, including logic building and problem-solving?

    RQ3: In what ways does TECHMAZE promote creative and independent learning by giving access to live compilers, visual guides, and a clear user interface?

    RQ4: What do programming instructors and educators think about using AI-integrated platforms like TECHMAZE in the academic curriculum, and what suggestions do they have for improvement?

    During the experiment, students completed specific tasks using TECHMAZE. These tasks included debugging exercises, logic design, and live coding sessions. We also gathered data through screen recordings, platform analytics, and student feedback using questionnaires and interviews. The study aimed to understand how usable and effective TECHMAZE is in real classroom settings.

      1. PARTICIPANTS

        The experimental trial of the TECHMAZE platform was carried out in a controlled academic environment, a university classroom in Mangalore, India, in the spring of 2025. 30 undergraduate students from the Bachelor of Computer Applications (BCA) and Master of Computer Applications (MCA) courses were chosen to take part in the experiment. All of them had completed a minimum of one semester of basic programming courses and were familiar with writing and compiling programs in languages like Python and C++. They also had experience with basic debugging and the use of online code compilers.

        However, most of the students had not had adequate exposure to real-time AI code support, smart code editors, or guided learning processes such as TECHMAZE. Their exposure to advanced programming techniques, e.g., modularization, optimized logical flow, or incorporating AI support tools into the programming process, remained at a nascent stage. This made them well-suited candidates for testing the impact of TECHMAZE on coding proficiency, logical reasoning, and work performance.

        To eliminate teaching heterogeneity, the same instructor taught and guided all participants throughout the study period. The instructor was a proficient computer science educator with over six years of teaching experience and had been trained in the use of the TECHMAZE platform prior to the experiment. The classroom environment, software facilities, and technical setup (e.g., IDE, internet, and machines used) were identical for the control and experimental sessions to keep the two comparable.

        All students used two systems during the experiment: an ordinary programming environment (a standard online compiler without AI support) and the TECHMAZE system (with its native AI support and guided coding interface). The two systems were used in two separate sessions, with students assigned similar coding tasks based on pre-defined themes such as "Create a Quiz App" or "Build a Temperature Converter." All tasks were chosen to be novice-level while still capable of highlighting differences in logic development, code structure, and quality of completion.

        All participants were proficient in English and had prior experience reading technical manuals. As compensation for taking part, students received certificates of contribution and bonus points toward their internal assessment.

      2. PROCEDURE

        To systematically investigate the impact of TECHMAZE on students' programming creativity and logical development, we conducted a within-subject study in which each student participated in two independent theme-based programming sessions. To minimize the chance of learning fatigue or memory transfer from one session to the other, a one-week gap was maintained between the two sessions. The study used a counterbalanced design in which both the programming system (TECHMAZE vs. the conventional online compiler) and the programming theme (Task A vs. Task B) were rotated across participants. This ensured that each participant was exposed to both systems under comparable conditions.

        Each session took about 90 minutes. The first 20 minutes were left open for students to work independently and familiarize themselves with the TECHMAZE system: they could explore the code editor, browse AI suggestions, and get used to the organized layout of the interface. This was followed by a 10-minute Q&A session run by the researchers to clear up any confusion and help students feel comfortable with the platform. The teacher then took 20 minutes to introduce the assignment for the day, outlining both programming goals and technical specifications. The remaining 40 minutes were devoted to actual development, during which students used either TECHMAZE or the traditional compiler to build their projects.

        To maintain consistency between sessions, all software interfaces (including TECHMAZE) were configured to English, and the same hardware configuration was used. Researchers did not offer explicit support during programming except in the case of technical issues or application bugs. Screen recordings were captured using desktop software to document each student's activity in every session, and final project files were collected in .py, .c, or .html formats depending on the programming language used.

        After each session, students were asked to fill in a Creativity Support Index (CSI) questionnaire assessing the degree to which the system supported their ideation, flow, and technical clarity while programming. For deeper comments, semi-structured interviews were also conducted with the students, focusing on their experience with TECHMAZE features such as real-time code suggestions, structured learning flow, and AI-based logic expansion.

        In addition, we wanted to assess the long-term cognitive effect of working with TECHMAZE. To this end, a pilot cohort of three students was selected for a special four-week learning module over the semester break. These students developed one project per week with TECHMAZE under the course instructor's supervision. A pre-test and post-test were administered using a validated Computational Thinking (CT) Skills Survey, along with follow-up interviews, to see how TECHMAZE affected their capacity for breaking down problems, structuring logic, and translating ideas into functioning code.

        Beyond the student-centered perspective, we also wanted to observe how teachers perceived TECHMAZE. Six teachers with prior experience teaching programming and visual logic to first-year students were interviewed for one hour each. The interviews began with an open-ended discussion of their current teaching approach, the challenges of encouraging creativity in coding, and how they viewed AI-powered tools in the classroom. The educators were then introduced to TECHMAZE and its basic operation. They were asked to weigh the strengths, weaknesses, and ethical issues of the system, e.g., the risk of dependency or AI bias in its recommendations. Lastly, they provided feedback on whether and how they would want to incorporate TECHMAZE into their teaching, and how it compares with other tools such as ChatGPT or conventional compilers. Remote interviews were conducted using software such as Google Meet or Zoom and were recorded with permission for analysis. These interviews were instrumental in shaping the final recommendations and the future evolution of the TECHMAZE system.

      3. MEASUREMENTS

        To critically assess the efficacy of TECHMAZE in enhancing students' creative programming skills and computational thinking, a robust multi-dimensional measurement approach was followed. The measurement framework combined qualitative and quantitative data to obtain a holistic picture of how students interact with the platform, what learning is achieved, and how students and teachers perceive its usability. Data collection took place in six steps and included six types of information: objective performance measures from real student work, teacher expert judgments, graphical mind map analysis, self-reported questionnaire feedback, computational thinking skill ratings, and artifact-based interviews to probe deeper thinking and affective engagement.

        Among the quantitative metrics used was the Dr. Scratch rubric, a benchmark scoring rubric that assesses students' Scratch projects on seven dimensions of computational thinking: abstraction, parallelism, logic, synchronization, flow control, interactivity, and data representation. Each of the seven dimensions was scored on a scale of 0 to 3, with higher scores indicating better use of coding concepts and more effective embedding of logical flow. The rubric allowed us to systematically measure the technical richness of each student's coding solution.
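        As an illustration, a Dr. Scratch-style total score can be computed by summing the seven per-dimension ratings; a minimal sketch follows (the function and the sample ratings are hypothetical, not part of the rubric itself):

```python
# Hypothetical sketch of Dr. Scratch-style scoring: each of the seven
# computational-thinking dimensions is rated 0-3, giving a total of 0-21.
DIMENSIONS = [
    "abstraction", "parallelism", "logic", "synchronization",
    "flow_control", "interactivity", "data_representation",
]

def ct_total(scores: dict) -> int:
    """Sum the per-dimension ratings after validating their range."""
    for dim in DIMENSIONS:
        if not 0 <= scores[dim] <= 3:
            raise ValueError(f"{dim} must be rated 0-3, got {scores[dim]}")
    return sum(scores[dim] for dim in DIMENSIONS)

# Example project rated on all seven dimensions (sample values).
example = {
    "abstraction": 2, "parallelism": 1, "logic": 3, "synchronization": 1,
    "flow_control": 2, "interactivity": 3, "data_representation": 2,
}
print(ct_total(example))  # 14
```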

        The students' subjective experience of creativity support was measured using the Creativity Support Index (CSI) questionnaire. This standardized survey tool measured the extent to which TECHMAZE supported their ideation, was enjoyable and exciting, took them to new frontiers, and assisted in problem-solving. The 12-statement questionnaire, covering six dimensions such as enjoyment, expressiveness, and collaboration, used a 5-point Likert scale. Students completed the questionnaire in a quiet room at the end of each session, without peer influence or distractions.

        The second major indicator was the number of nodes in the mind map, used to measure the richness and depth of students' planning and coding reasoning. Following previous educational research, we counted the total number of nodes (graphic objects) constructed in the TECHMAZE domain during project development. Nodes were either creative components (e.g., characters, sounds, images) or programming reasoning nodes (e.g., loops, triggers, conditions, and output actions). In general, the more nodes, the richer and more comprehensive the solution space constructed by the student, most often the product of more planning and greater commitment.
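        As a sketch of how the 5-point, 12-statement CSI variant described here could be aggregated (the item-to-dimension mapping and the sample ratings are hypothetical; the original CSI uses a different 100-point scoring scheme):

```python
# Hypothetical aggregation: two Likert items (1-5) per dimension,
# dimension score = mean of its items, overall = mean of dimensions.
from statistics import mean

DIMENSIONS = ["enjoyment", "expressiveness", "collaboration",
              "exploration", "immersion", "results_worth_effort"]

def csi_scores(responses: dict) -> tuple:
    """responses maps each dimension to its two Likert ratings (1-5)."""
    per_dim = {d: mean(responses[d]) for d in DIMENSIONS}
    return per_dim, mean(per_dim.values())

# Example respondent (sample ratings).
answers = {
    "enjoyment": (4, 5), "expressiveness": (4, 4), "collaboration": (3, 4),
    "exploration": (5, 4), "immersion": (4, 3), "results_worth_effort": (5, 5),
}
per_dim, overall = csi_scores(answers)
print(per_dim["enjoyment"], overall)  # prints dimension mean and overall mean
```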

        To determine how TECHMAZE affects logical and computational thinking in the long run, a set of students completed a Computational Thinking (CT) Skills Survey before and after a four-week learning experience. The survey tested students on five critical skills: cooperation, creativity, critical thinking, algorithmic thinking, and problem-solving. Four questions were provided in each area, and all answers were scored on a 5-point Likert scale. Pre-test and post-test analysis allowed us to observe whether sustained exposure to TECHMAZE produced any significant improvement in students' knowledge and application of CT skills over time.

        Finally, we conducted artifact-based interviews with students, using their mind maps and coding projects as concrete points of reference to inform discussion. The interviews were structured around three broad themes: how students had tackled and interpreted the programming task (as defined by teacher objectives), how they had overcome challenges while building their projects, and what they had learned about the TECHMAZE environment. Questions were open-ended and addressed issues such as whether students felt they had successfully communicated their ideas, what technical problems they had encountered, how they had resolved them, and what they had found most helpful about TECHMAZE. This qualitative method gave us a richer sense of student thinking processes, affective responses, and tool use than metrics alone could provide. Through this rigorous and multi-dimensional measurement process, we ensured that both the cognitive outputs and creative productions associated with TECHMAZE were evaluated fairly, deeply, and from various perspectives.

      4. DATA ANALYSIS

    To ensure that the measurement of TECHMAZE's effect on student learning was valid and scientific, a counterbalanced experimental design was used. The design precluded bias from the order of exposure (which system students used first) or from the theme of the creative programming task. By systematically rotating the tool pairs (TECHMAZE vs. the baseline compiler environment) and task themes among participants, the research team controlled these extraneous variables, isolating the true effect of TECHMAZE on students' performance.

    For statistical analysis, a paired t-test was chosen as the main procedure to compare results from the two systems. The test was chosen because all participants used both systems (the baseline compiler and TECHMAZE), so a test comparing paired observations was appropriate. This allowed the researchers to determine whether there were meaningful differences in students' performance (project quality, creativity indexes, or computational thinking measures) based on the platform used.
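    As a sketch of this procedure (the score lists below are illustrative, not the study's data), the paired t statistic is computed from each participant's per-system scores:

```python
# Illustrative paired t statistic: t = mean(d) / (sd(d) / sqrt(n)),
# where d are the per-participant differences (TECHMAZE - baseline).
from math import sqrt
from statistics import mean, stdev

def paired_t(baseline: list, techmaze: list) -> float:
    diffs = [t - b for b, t in zip(baseline, techmaze)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Sample scores for three participants (hypothetical values).
t_stat = paired_t([12, 14, 11], [15, 16, 14])
print(round(t_stat, 2))  # 8.0
```

    In practice the corresponding p-value would be obtained from the t distribution with n-1 degrees of freedom (e.g., via a statistics library's paired-sample t-test routine).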

    To maximize the robustness of the results and avoid false positives, the researchers applied a Bonferroni correction. This statistical adjustment is especially useful when conducting multiple comparisons, as it keeps the risk of Type I errors (falsely identifying a difference where none exists) to a minimum. By applying this adjustment, the team used a conservative criterion for statistical significance, so that any improvements attributed to TECHMAZE were genuinely significant and not a product of random fluctuation. All the raw data collected, together with the complete statistical analysis code, were thoroughly documented and made available in the supplementary material. This renders the study reproducible and enables other members of the academic community to verify, criticize, or extend the research using the same data and analysis protocols.
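    The Bonferroni adjustment itself is simple: each raw p-value is multiplied by the number of comparisons (equivalently, the significance threshold is divided by it). A minimal sketch with illustrative p-values:

```python
# Bonferroni correction: with m comparisons, reject only if p * m < alpha
# (adjusted p-values are capped at 1.0).
def bonferroni(p_values: list) -> list:
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

# Three hypothetical raw p-values from three paired t-tests.
raw = [0.004, 0.03, 0.6]
print(bonferroni(raw))  # the third value is capped at 1.0
```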

    1. RESULTS

        1. RQ1: Impact on Achieving Learning Objectives

          To assess how well student outcomes matched the learning objectives set by instructors, we looked at how projects created with TECHMAZE met predefined coding goals. Throughout the evaluation sessions, students using TECHMAZE showed higher task completion rates and better alignment with the intended curriculum compared to those using traditional text-based editors.

          All 20 participants in the TECHMAZE group completed their coding assignments according to instructor guidelines, which included goals like implementing conditional branching, loop structures, and compiling error-free code. In contrast, only 12 out of 20 students in the control group (traditional IDE users) fully met these criteria. This gap highlights TECHMAZE's effectiveness in supporting learning and guiding students toward educational targets.

          Instructor feedback confirmed these findings. Teachers noted that TECHMAZE's real-time error feedback, step-by- step logic builder, and language-specific code suggestions helped decrease confusion and cognitive overload for students. This allowed them to focus on problem-solving in programming. Students using TECHMAZE also showcased better project organization and clearer code submissions.

        2. RQ2: Effectiveness on Code Quality and Computational Thinking Skills

          To assess the impact of TECHMAZE on students' coding skills and computational thinking (CT), we used a multi-metric rubric based on Dr. Scratch's framework, tailored for textual programming languages. The rubric included six CT dimensions: abstraction, logic, flow control, algorithmic thinking, interactivity, and debugging strategies.

          Statistical analysis with paired t-tests showed significant improvement in five out of six dimensions for the TECHMAZE group. The average total scores rose from 12.3 (SD = 3.9) in the baseline group to 17.5 (SD = 3.4) in the TECHMAZE group. Specifically, logical thinking (t(19) = 5.78, p < 0.01), abstraction (t(19) = 4.22, p = 0.002), and debugging skills (t(19) = 3.87, p = 0.008) showed notable improvements.

          During post-task interviews, students explained how the real-time hints, syntax highlighting, and pseudocode-to-code translation features improved their understanding of complex topics. One student (P7) said, "When I forgot how to structure a while loop, TECHMAZE provided examples that matched my logic; I didn't have to search externally." Teachers noticed improvements in students' problem-solving behavior and their ability to self-correct, especially regarding debugging loops and handling nested logic.

          Pre- and post-course surveys also showed that students using TECHMAZE experienced measurable gains in algorithmic thinking, planning, and breaking down tasks. This suggests that TECHMAZE not only improves coding skills but also promotes higher-order thinking in programming.

        3. RQ3: Effectiveness on Creativity and Learning Engagement

      Another goal of our research was to explore how TECHMAZE affected students' creative engagement during programming tasks. We used both quantitative and qualitative measures to evaluate creativity support, including self-reported Creativity Support Index (CSI) scores and an analysis of student-submitted projects.

      On average, students using TECHMAZE scored significantly higher in creativity areas like exploration (M = 4.2, SD = 0.65) and expressiveness (M = 4.0, SD = 0.73) than students in the baseline group (M = 3.1, SD = 0.82 and M = 3.0, SD = 0.77, respectively). A paired-sample t-test revealed significant differences in collaboration (t(19) = 3.56, p < 0.01), immersion (t(19) = 3.94, p = 0.003), and the results-worth-effort factor (t(19) = 4.18, p = 0.001).

      Students were seen exploring optional modules like theme customization, language switching, and the real-time leaderboard, indicating deeper engagement. Many participants creatively reimagined problems using unique UI inputs and even added animations to outputs, especially in Python visualization modules. One student (P11) said, "The platform made me feel like I could turn my code into a story." Additionally, the integrated AI helper in TECHMAZE allowed for various problem-solving paths, encouraging students to experiment and refine their code iteratively. This approach fostered both persistence and inventive thinking.

      4. RQ4: Educators' Perspectives and Pedagogical Implications

      To evaluate TECHMAZE's suitability and effectiveness in the classroom from the teachers' viewpoint, we conducted semi-structured interviews with five computer science instructors who had led TECHMAZE sessions.

      Educators praised the structured curriculum flow, AI-driven code hints, and built-in compiler as key advantages that reduced students' dependence on constant guidance. T3 noted, "Students have fewer questions about syntax and ask more logical ones now; it's a positive shift." T1 appreciated how TECHMAZE helped connect theory with practice, especially for students moving from drag-and-drop environments to typed coding.

      However, some concerns were raised. T4 warned that dependence on code suggestions might lessen the trial-and-error aspect of learning if not balanced properly. T5 mentioned that TECHMAZE may need extra training modules to support advanced learners or competitive coders. Educators agreed on the benefits of integrating TECHMAZE into their regular curriculum and requested a teacher dashboard for tracking individual progress and peer collaboration and for identifying struggling students based on real-time data. They also raised privacy concerns and recommended that student analytics be anonymized where necessary.

    2. DISCUSSION

      In this section, we reflect on the key findings and experiences gained during the development and evaluation of our system, TECHMAZE: An Interactive Programming Learning Platform. We also outline important design considerations, system limitations, and future opportunities for improving the platform, especially in educational settings where programming instruction is increasingly essential.

        1. Design Considerations

          1. Ensuring Reliability of AI Responses and Citing Trusted Sources

            One of the challenges in TECHMAZE, identified during feedback sessions with educators and users, is the trustworthiness of AI-generated responses. When users, especially beginners, rely on TECHMAZE's intelligent recommendations, there is a risk they may accept all AI feedback without critical thinking, even when it is inaccurate. Although TECHMAZE uses prompt structuring to generate clearer code suggestions, errors can still occur.

            To address this, future versions of TECHMAZE should consider incorporating Retrieval-Augmented Generation (RAG) techniques. This would allow the AI to pull reliable, real-time information from verified programming knowledge bases such as W3Schools, Stack Overflow (moderated), and official documentation while providing source citations. Doing so would increase transparency, help learners trust the system, and offer additional learning resources.

          2. Balancing AI Support with Independent Learning Effort

            TECHMAZE is designed to support the learning process through a combination of code hints, syntax scaffolding, and real-time compilation. However, based on scaffolding theory, a strong learning platform must not only provide support but also encourage students to think and act independently. Currently, TECHMAZE allows users to select a language, such as Python, C, or C++, and view relevant tutorials or documentation. As a next step, TECHMAZE could include challenge-based prompts where students must attempt solutions before receiving help. The system could then provide tiered hints, allowing learners to struggle constructively. Additionally, gamification elements like leaderboards, coding badges, and project-sharing features could motivate learners to explore programming beyond the minimum task requirements, enhancing motivation and self-driven learning.
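            A minimal sketch of how such tiered hinting might work (the hint tiers, thresholds, and messages are hypothetical, not part of the current platform):

```python
# Hypothetical tiered-hint policy: the more failed attempts a learner
# has made on a task, the more revealing the hint they may request.
HINT_TIERS = [
    (0, "No hint yet: try a solution first."),
    (1, "Conceptual nudge: restate the problem as input -> steps -> output."),
    (3, "Structural hint: a loop with a condition check fits this task."),
    (5, "Worked fragment: a partial code skeleton is unlocked."),
]

def next_hint(failed_attempts: int) -> str:
    """Return the strongest hint the learner has unlocked so far."""
    unlocked = [msg for threshold, msg in HINT_TIERS
                if failed_attempts >= threshold]
    return unlocked[-1]

print(next_hint(0))   # learner must try first
print(next_hint(4))   # structural hint unlocked
```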

          3. Providing Teachers with Real-Time Student Monitoring Tools

            One of TECHMAZE's future goals is to empower educators with data-driven insights into student performance. While students work on exercises in TECHMAZE, their interactions, such as code errors, time spent per module, and hint usage, can be logged (with consent) and shown through a teacher dashboard.

            This dashboard would help educators identify:

            • Students struggling with specific concepts, such as loops or recursion.

            • Common misconceptions across the class.

            • Time taken per coding attempt.

              Future versions could include customizable AI prompts set by teachers, where the system could adjust support complexity based on classroom progress. Teachers could even set a difficulty slider, enabling AI to be more exploratory for advanced students or more guided for beginners.
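              As an illustration of the dashboard idea (the event format and threshold below are hypothetical), flagging struggling students could reduce to a simple aggregation over consented interaction logs:

```python
# Hypothetical aggregation over logged events: flag (student, concept)
# pairs whose error count meets a configurable threshold.
from collections import Counter

def struggling(events: list, threshold: int = 3) -> set:
    """events: (student_id, concept, kind) tuples; kind is 'error' or 'ok'."""
    errors = Counter((sid, concept) for sid, concept, kind in events
                     if kind == "error")
    return {(sid, concept) for (sid, concept), n in errors.items()
            if n >= threshold}

# Sample log: s1 repeatedly fails on loops; s2 is mostly fine.
log = [
    ("s1", "loops", "error"), ("s1", "loops", "error"),
    ("s1", "loops", "error"), ("s2", "loops", "ok"),
    ("s2", "recursion", "error"),
]
print(struggling(log))  # {('s1', 'loops')}
```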

        2. Limitations and Future Work

          Although TECHMAZE shows strong potential in improving interactive programming education, several limitations must be acknowledged.

          1. Limited Age and Skill Range of Test Users

            The initial evaluation involved college students who were already familiar with basic programming. This limits our understanding of how TECHMAZE performs for complete beginners, such as high school or middle school students. Future studies should test the system with a wider age range, including younger learners and non-CS students, to ensure broader accessibility and adaptability.

          2. Currently Supports Only Three Languages

            As of now, TECHMAZE supports Python, C, and C++, each linked with documentation and compiler modules. However, students learning web development or data science may benefit from languages like JavaScript, HTML/CSS, or Java. Expanding TECHMAZE to support multiple languages, including visual block-based ones like Scratch, could greatly increase its reach and application across different learning levels.

          3. No Built-in Assessment or Feedback Mechanism

            While TECHMAZE provides a compiler and coding environment, it currently lacks automated evaluation or feedback mechanisms. Learners do not receive direct feedback on:

            • Code readability

            • Logic flow

            • Best practices

              Future versions of TECHMAZE should integrate AI-based assessment rubrics to evaluate student submissions based on code quality, correctness, efficiency, and adherence to problem requirements. This would transform TECHMAZE from a simple practice tool into a self-learning system.
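              One way such an assessment rubric could be structured (the criteria and weights below are hypothetical) is as a weighted score over the qualities listed above:

```python
# Hypothetical weighted rubric: each criterion is scored 0-1 by an
# evaluator (human or AI) and combined into a single 0-100 mark.
WEIGHTS = {"correctness": 0.4, "efficiency": 0.2,
           "readability": 0.2, "requirements": 0.2}

def rubric_score(criteria: dict) -> float:
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return 100 * sum(WEIGHTS[name] * criteria[name] for name in WEIGHTS)

# Sample submission scores per criterion (illustrative values).
submission = {"correctness": 1.0, "efficiency": 0.5,
              "readability": 0.8, "requirements": 1.0}
print(round(rubric_score(submission), 1))  # 86.0
```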

          4. Ethical Use and Privacy Concerns in AI Integration

            With the rising use of Generative AI, ethical concerns must be addressed. Though TECHMAZE uses prompt filtering to avoid harmful content, there's still a chance of bias, misinformation, or inappropriate output, especially when reaching a broader user base.

      To ensure safe use, TECHMAZE should:

      • Include a moderation layer, like OpenAI's moderation API.

      • Allow reporting and flagging of harmful or irrelevant responses.

      • Inform users of limitations and best practices for using AI responsibly.

      Additionally, data privacy is crucial. Any logging of student progress or questions should comply with local data protection regulations, such as GDPR and COPPA, and include opt-in policies.

      Conclusion of Discussion

      Overall, TECHMAZE serves as a promising step forward in combining interactive coding practice with intelligent support systems. While current results show positive engagement and learning outcomes, further research and development are needed to improve response reliability, broaden applicability across age groups and languages, and ensure ethical and personalized learning experiences.

      By integrating adaptive learning, real-time assessments, and educator tools, TECHMAZE can evolve into a powerful platform for the future of programming education.


    3. CONCLUSION

This paper introduces TECHMAZE, an intelligent and interactive programming platform that connects theoretical instruction with practical coding skills in the classroom. TECHMAZE is unique as an AI-powered environment that enables real-time programming, language-specific learning paths, and step-by-step problem-solving, all within a user-friendly interface.

The main innovation of TECHMAZE is its modular setup, which features instant syntax validation, a dynamic code suggestion tool, and engaging learning features like gamified challenges and language-specific editors. By addressing both logic development and code execution, TECHMAZE aids learners of different skill levels and eases the teaching burden for educators.

Our comparative study involving 40 students showed that TECHMAZE significantly boosts task completion rates, code quality, computational thinking skills, and creativity compared to traditional programming environments. Additionally, students reported increased engagement and creative expression. Educators also praised the platform for effectively guiding learners while allowing for instructional control.

Interviews with educators highlighted TECHMAZE's promise to change the classroom experience through AI support, integration of various resources, and structured learning paths. While it shows great potential, future work will aim to expand algorithm complexity support, improve critical thinking exercises, and add teacher dashboards for tracking performance in real time.

Overall, TECHMAZE presents a forward-thinking model for creating intelligent, student-focused programming environments and serves as a basis for developing effective, scalable, and inclusive coding education platforms.

REFERENCES

  1. React https://reactjs.org

  2. Supabase https://supabase.io/docs

  3. Tailwind CSS https://tailwindcss.com

  4. TypeScript https://www.typescriptlang.org

  5. PostgreSQL https://www.postgresql.org/docs/

  6. Codecademy https://www.codecademy.com

  7. freeCodeCamp https://www.freecodecamp.org

  8. LeetCode https://leetcode.com

  9. ChatGPT https://chat.openai.com

  10. Google https://www.google.com