
Mindful Harbour: Scalable AI-Powered Mental Wellness Platform with Gemini 2.5 Integration

DOI: https://doi.org/10.5281/zenodo.19468660



Prof. Dharmaraj T B, Mrs. Sowparnika M., Sajeetha S, Dhiwakar E, Sakthi Pradeep T, Thava Vishwa S

Department of Information Technology, PPG Institute of Technology (Autonomous), Affiliated to Anna University, Tamil Nadu, India

Abstract – Mental health problems affect about one in eight people worldwide, but getting help is hard: there aren't enough professionals, care costs too much, and stigma still lingers [1]. Digital tools such as chatbots scale easily, but they often fail to connect with personal tracking or helpful wellness resources. This paper introduces Mindful Harbour, a web platform that pairs Google's Gemini 2.5 Flash chatbot with mood tracking and wellness tips, making emotional support easier to access and scale. We built the system with a Model-View-Controller setup using Flask, SQLite, and the Gemini API, and used context-aware prompting so that responses stayed empathetic and safe. We evaluated the platform with unit, integration, system, and load tests, and we also ran a small usability study with real users. Unit tests reached 98% coverage, all integration and system tests passed, and load tests with 100 simultaneous users averaged 2.4 seconds per response with only a 0.5% error rate. The usability study found that users finished 95–100% of tasks and reported high satisfaction, saying the AI's empathetic responses and mood tracking kept everything going smoothly. Safety filters stopped all harmful outputs across more than 50 test cases. Mindful Harbour shows how AI platforms can close key gaps in mental health care: it blends chat-based AI with long-term mood tracking in an easy web tool, giving people a judgment-free, scalable way to handle mild anxiety and stress. Next, we'll add memory that carries across sessions and integrate binary crisis detection models to improve safety and continuity.

Keywords – Artificial Intelligence, Mental Health, Chatbot, Digital Intervention, Flask, Gemini API, Mood Tracking, Accessible Care.

  1. INTRODUCTION

    Mental health is essential to people's well-being worldwide, but it's still one of the most neglected parts of public health. The World Health Organization estimates that about one out of every eight people around the world has a mental health condition. Even though mental health needs are this common, getting help is still hard: there aren't enough mental health professionals, online therapy can cost about $60–$120 a week, and stigma keeps many people from reaching out [1, 2].

    To deal with these problems, more digital mental health tools have appeared. At first, people built simple information websites or structured apps for CBT. Still, many of these solutions have major shortcomings. Apps like Headspace may be cheap and soothing, but their interactions don't feel deep or tailored to the individual. On the other hand, CBT chatbots such as Woebot use research-backed methods, but they can seem stiff and don't always feel as natural or flexible as real conversation [3, 4]. Services such as BetterHelp connect users with licensed professionals, but many people still can't afford them, and there are often waiting times too [5].

    There's a platform missing in today's digital landscape: one that blends a caring, chat-based helper with mood tracking, all wrapped in an easy-to-use web interface. Because we noticed people looking for emotional support late at night when no professional help is available, this study set out to create a complete, all-in-one solution [6, 7]. This paper presents Mindful Harbour, a web app that offers quick, anonymous, nonjudgmental emotional support.

    Mindful Harbour uses Google's Gemini 2.5 Flash AI to chat in a natural, caring way while keeping mood tracking and initial check-ins secure, offering a steady, scalable option alongside traditional care. The platform protects user privacy and works around the clock, serving as a helpful complement for people dealing with mild anxiety, stress, and mood swings.

  2. LITERATURE REVIEW

    Research shows that digital tools for mental health work. Meta-analyses show that digital CBT can meaningfully lower depression and anxiety symptoms, with results about as strong as traditional in-person therapy. Conversational agents, in particular, show real promise. In a randomized controlled trial, Woebot helped young adults feel less anxious than an information-only control group, with a moderate effect size (Cohen's d = 0.44) [3].

    Beyond standard CBT, empathy-focused chatbots such as Wysa have delivered real results: people report feeling noticeably better after talking with the AI [4]. Research on AI empathy suggests that large language models can respond with strong emotional accuracy, but these conversations are still simulated, so they must be built carefully to prevent harm. This points to the field's biggest worry: safety. Careful prompt engineering and content checks help reduce risks, like the AI giving dangerous advice or pretending to offer therapy it can't truly provide.

    A review of current platforms shows a big gap in integration and personalization, as Table 1 shows. Teletherapy offers expert care, but it's expensive and not immediate. Meditation apps are great for self-help, but they don't offer interaction. CBT apps use proven methods, but they can feel rigid. Peer support sites feel compassionate, but the quality is hit-or-miss. Mindful Harbour closes the gap by bringing everything together in one place, mixing the freedom of a chat-like AI with the steady, personal follow-up of mood tracking.

    PLATFORM TYPE | STRENGTHS | LIMITATIONS
    Teletherapy (BetterHelp) | Licensed professionals, evidence-based care | High cost ($60–120/week), wait times
    Meditation (Headspace) | Low cost, calming and polished UI | No conversation, generic content
    CBT Apps (Woebot) | Structured therapy, evidence-based techniques | Rigid interactions, limited conversational flexibility
    Peer Support (7 Cups) | Human empathy, free access | Inconsistent quality of support, lack of clinical oversight
    Mindful Harbour | 24/7 AI conversation, integrated mood tracking, scalable | Non-clinical, limited session memory

    Table 1: Comparative Analysis of Digital Mental Health Platforms

  3. METHODOLOGY

    Mindful Harbour was built using a clear software process, putting modular parts first, making testing easy, and designing around what users need.

    1. Development Framework

      The app was created with the Model View Controller (MVC) design pattern [18].

      Backend: Python Flask was used to manage routes, authentication, and business logic (controllers) [11], [12].

      Frontend: Jinja2 templating engine was used to dynamically render HTML/CSS views [26], [27], ensuring a responsive user interface that adapts to various devices.

      AI Integration: The Gemini 2.5 Flash API was integrated to power the chatbot [13].

      Database: SQLite acted as a simple, lightweight file-based database, holding user login details, mood check-ins, and initial assessment results (models) [14].
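The paper does not include implementation code, but the context-aware prompting step can be sketched roughly as follows. The system-prompt wording and the names `SYSTEM_PROMPT` and `build_prompt` are our own illustrative assumptions, not the platform's actual code:

```python
# Sketch of context-aware prompt assembly for the chat endpoint.
# SYSTEM_PROMPT's wording and build_prompt are hypothetical; the actual
# prompt used by Mindful Harbour is not published.

SYSTEM_PROMPT = (
    "You are an empathetic mental-wellness companion. "
    "You are not a therapist and must never give medical advice. "
    "If a user mentions self-harm, point them to crisis resources."
)

def build_prompt(history, user_message, max_turns=10):
    """Combine the safety system prompt, recent turns, and the new message."""
    recent = history[-max_turns:]  # bound the context window
    lines = [SYSTEM_PROMPT]
    for role, text in recent:
        lines.append(f"{role}: {text}")
    lines.append(f"user: {user_message}")
    return "\n".join(lines)

# The assembled string would then be sent to the Gemini 2.5 Flash API
# (e.g., via Google's generative AI client); that network call is omitted here.
```

Bounding the history keeps each request within the model's context limits while preserving recent conversational state.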

      2. Testing Strategy

      We used several levels of testing to make sure the system worked reliably and performed well.

      Unit Testing: Tested individual components in isolation, such as database models, password hashing, and API request formatting.

      Integration Testing: Validated interactions between modules, such as the user registration flow (web form → database write → login) and the chat flow (AJAX POST → Flask route → Gemini API → JSON response).

      System Testing: Simulated complete user journeys (e.g., user registration → initial assessment → mood check-in → AI conversation → dashboard review) to ensure end-to-end functionality.

      Load Testing: Simulated 100 concurrent users interacting with the chat endpoint to measure average response time and error rate.

      Usability Testing: A preliminary study was conducted with 10 users who were asked to complete key tasks (registration, mood tracking, and initiating a chat) while being observed [45].
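As an illustration of the unit-testing level, here is a self-contained sketch of what the password-hashing tests might look like. `hash_password` and `verify_password` are hypothetical stdlib-based helpers; the paper does not publish its test suite or its actual hashing implementation:

```python
import hashlib
import hmac
import os
import unittest

# Hypothetical password helpers using stdlib PBKDF2; Mindful Harbour's
# actual hashing code is not published and may differ.
def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

class TestPasswordHashing(unittest.TestCase):
    def test_round_trip(self):
        salt, digest = hash_password("s3cret")
        self.assertTrue(verify_password("s3cret", salt, digest))

    def test_wrong_password_rejected(self):
        salt, digest = hash_password("s3cret")
        self.assertFalse(verify_password("guess", salt, digest))

    def test_fresh_salt_changes_digest(self):
        _, d1 = hash_password("s3cret")
        _, d2 = hash_password("s3cret")
        self.assertNotEqual(d1, d2)

# Run with: python -m unittest <module>
```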

  4. SYSTEM ARCHITECTURE

    The proposed chatbot system consists of five main components:

      1. User Interface Layer
        • User interaction via web interface
        • Built with HTML, CSS and JavaScript
      2. Application Layer
        • Flask Web Server – to handle user requests.
        • Chatbot communication API endpoints
      3. Conversation Manager
        • Keeps the conversation moving
        • Maintains context and session state
      4. AI Processing Layer
        • Leverages a generative AI model (Gemini / LLM)
        • Produces context aware and empathetic responses
      5. Session and Privacy Management
        • Saves temporary conversation context
        • Ensures privacy protection
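The Conversation Manager's session handling can be sketched as below; `SessionContext` and its methods are hypothetical names, assuming a bounded in-memory buffer consistent with the temporary, privacy-preserving storage described above:

```python
from collections import deque

class SessionContext:
    """Bounded, in-memory conversation context for one session.

    Hypothetical sketch: the actual Conversation Manager implementation
    is not published.
    """

    def __init__(self, max_turns=10):
        # Keep the last max_turns user/model exchanges (2 messages each).
        self.turns = deque(maxlen=2 * max_turns)

    def add(self, role, text):
        self.turns.append((role, text))

    def context(self):
        """Recent turns to hand to the AI Processing Layer."""
        return list(self.turns)

    def clear(self):
        """Privacy: drop all context when the session ends."""
        self.turns.clear()
```

A bounded deque keeps memory per session constant and means nothing persists once `clear()` runs at logout, matching the platform's temporary-context design.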

    Fig. 1: Structural Architecture

    Fig. 2: Analysis Architecture

  5. EXPERIMENTAL RESULTS

    The evaluation of Mindful Harbour checked how reliably it worked, how it handled heavy load, and how users experienced it.

      1. Functional and Performance Testing

        All main features worked in testing. The system logged users in, ran database tasks, and kept AI chats going smoothly, with no serious errors. The web app's performance stayed within acceptable limits (see Table 2).

        TEST TYPE | TESTS RUN | PASSED | KEY METRICS
        Unit (Database) | 24 | 24 | 98% coverage
        Unit (Auth) | 15 | 15 | 100%
        Integration (Chat) | 12 | 12 | 100%
        System (User Journey) | 5 | 5 | 100%
        Load (100 concurrent users) | N/A | N/A | 2.4 s avg. response, 0.5% error rate
        AI Safety | 50+ scenarios | 100% | Zero harmful outputs

        Table 2: System Performance and Test Results

      2. Usability Study

        The usability study showed high satisfaction and ease of use. Participants finished registering in about 45 seconds on average, and task completion rates ranged from 95% to 100%. People said the AI felt caring and not judgmental, and they liked being able to track how their mood changed over time. Users also said they wanted the app to remember them for longer, because it didn't keep their context from one login session to the next.

      3. ROC Analysis for Safety Classification

    Mindful Harbour's current version is built for open-ended supportive conversation, not for binary classification tasks such as crisis detection, so a conventional ROC curve does not apply to the main AI task. To keep things safe, we used prompt engineering and output filtering, then tested them with 50+ adversarial inputs designed to elicit harmful advice. The system passed every test and produced no harmful replies.
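The adversarial check can be pictured with a minimal harness; the keyword screen and the phrases below are purely illustrative stand-ins (the real safeguards combine prompt engineering with the model's own safety settings):

```python
# Illustrative output filter; the phrase list is a hypothetical stand-in,
# not Mindful Harbour's actual filtering logic.
BLOCKED_PHRASES = ["how to harm", "dosage to overdose", "methods of self-harm"]

def output_is_safe(reply):
    """True if the reply contains none of the blocked phrases."""
    lowered = reply.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def run_adversarial_suite(chatbot, prompts):
    """Fraction of adversarial prompts that received a safe reply."""
    safe = sum(output_is_safe(chatbot(p)) for p in prompts)
    return safe / len(prompts)
```

Under this sketch, a run over the 50+ adversarial prompts would be expected to report 1.0, matching the zero-harmful-outputs result above.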

    Fig. 5: Personal Dashboard

    Fig. 3: ROC analysis

  6. RESULT & DISCUSSION

    Mindful Harbour passed every level of testing:

    • Unit tests (database): 24 tests executed, 24 passed; code coverage reached 98%.
    • Integration tests (chat): 12 tests executed, 12 passed.
    • System tests (user journeys): 5 tests executed, 5 passed (100%).
    • Load test (100 concurrent users): Average response time was 2.4 seconds with an error rate of 0.5%.
    • AI safety test: 50+ adversarial scenarios were tested, resulting in zero harmful outputs.
    • Usability study: Task completion rates ranged from 95% to 100%, and the average registration time was 45 seconds.

    Fig. 4: Login Page

    Fig. 6: Support Chat Conversation

    Fig. 7: Media Lounge

    Fig. 8: Relaxation (Breathing exercise)

    Fig. 9: Relaxation (Yoga and exercises)

    Fig. 10: Sleep Coach (Track Sleep & Improve rest)

    Fig. 11: Weekly Progress Check-in

  7. CONCLUSION AND FUTURE WORK

The project built and tested Mindful Harbour, a web platform that pairs an AI chatbot with mood tracking and wellness resources, making mental health support easy to access. The system uses Flask, SQLite, and the Gemini 2.5 Flash API, and it's organized with a Model-View-Controller design.

In thorough testing (unit tests at 98% coverage, integration and system tests passing every time, and load tests with 100 concurrent users), the platform held steady, averaging 2.4 seconds per response with only a 0.5% error rate. In the usability study, real users hit a 95–100% task completion rate, and most finished registration in about 45 seconds. Users said they felt understood by the AI's caring replies, and they valued mood tracking for self-reflection. Across 50+ adversarial cases we saw no harmful outputs, showing that the prompt design and safety filters worked.

Users said the biggest gaps were no memory between sessions for continuity and no automatic system to spot crises early. Next steps include saving ongoing conversation summaries, adding a binary crisis detector, and moving everything to a cloud database so it can scale. Mindful Harbour is a simple, judgment-free tool that people can use to handle mild anxiety and stress. It also bridges the gap between basic meditation apps and expensive therapy.

Future work: implementing memory that persists across sessions so the AI can recall earlier chats after a user logs back in, and adding a binary crisis detector to spot dangerous inputs and respond immediately.

REFERENCES

  1. World Health Organization, World mental health report: Transforming mental health for all. Geneva: World Health Organization, 2022.
  2. J. Torous and L. W. Roberts, Needed innovation in digital health and smartphone applications for mental health: Transparency and trust, JAMA Psychiatry, vol. 74, no. 5, pp. 437–438, 2017.
  3. K. K. Fitzpatrick, A. Darcy, and M. Vierhile, Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial, JMIR Mental Health, vol. 4, no. 2, p. e19, 2017.
  4. B. Inkster, S. Sarda, and V. Subramanian, An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study, JMIR mHealth and uHealth, vol. 6, no. 11, p. e12106, 2018.
  5. A. S. Miner, A. Milstein, and J. T. Hancock, Talking to machines about personal mental health problems, JAMA, vol. 318, no. 13, pp. 1217–1218, 2017.
  6. H. Gaffney, W. Mansell, and S. Tai, Conversational agents in the treatment of mental health problems: Mixed-method systematic review, JMIR Mental Health, vol. 6, no. 10, p. e14166, 2019.
  7. A. N. Vaidyam, H. Wisniewski, J. D. Halamka, M. S. Kashavan, and J. B. Torous, Chatbots and conversational agents in mental health: A review of the psychiatric landscape, Can. J. Psychiatry, vol. 64, no. 7, pp. 456–464, 2019.
  8. E. Bendig, B. Erb, L. Schulze Thuesing, and H. Baumeister, The next generation: Chatbots in clinical psychology and psychotherapy to foster mental health: A scoping review, Verhaltenstherapie, vol. 29, no. 4, pp. 266–280, 2019.
  9. K. Kretzschmar, H. Tyroll, G. Pavarini, A. Manzini, and I. Singh, Can your phone be your therapist? Young people's ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support, Biomed. Informat. Insights, vol. 11, p. 1178222619829083, 2019.
  10. A. Følstad and P. B. Brandtzaeg, Users' experiences with chatbots: Findings from a questionnaire study, Qual. User Exp., vol. 5, no. 1, pp. 1–14, 2020.
  11. M. Grinberg, Flask web development: Developing web applications with Python, 2nd ed. O'Reilly Media, 2018.
  12. A. Ronacher, Flask documentation (2.0.x), 2021. [Online]. Available: https://flask.palletsprojects.com/
  13. Google, Gemini API documentation, 2024. [Online]. Available: https://ai.google.dev/docs
  14. M. Owens and G. Allen, SQLite documentation, 2020. [Online]. Available: https://www.sqlite.org/docs.html

  15. Python Software Foundation, Python 3.9 documentation, 2023. [Online]. Available: https://docs.python.org/3.9/
  16. R. T. Fielding, Architectural styles and the design of network based software architectures, Ph.D. dissertation, Univ. California, Irvine, 2000.
  17. M. Fowler, Patterns of enterprise application architecture. Addison Wesley, 2002.
  18. E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design patterns: Elements of reusable object oriented software. Addison Wesley, 1994.
  19. A. Hunt and D. Thomas, The pragmatic programmer: From journeyman to master. Addison Wesley, 1999.
  20. R. C. Martin, Clean code: A handbook of agile software craftsmanship. Prentice Hall, 2008.
  21. American Psychological Association, Publication manual of the American Psychological Association, 7th ed. American Psychological Association, 2020.
  22. J. S. Beck, Cognitive behavior therapy: Basics and beyond, 2nd ed. Guilford Press, 2011.
  23. J. Kabat-Zinn, Full catastrophe living: Using the wisdom of your body and mind to face stress, pain, and illness. Bantam Books, 2013.
  24. M. M. Linehan, DBT skills training manual, 2nd ed. Guilford Press, 2014.
  25. National Institute of Mental Health, Mental health information, 2023. [Online]. Available: https://www.nimh.nih.gov/health
  26. MDN Web Docs, HTML: HyperText Markup Language, 2023. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/HTML
  27. MDN Web Docs, CSS: Cascading Style Sheets, 2023. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/CSS

  28. MDN Web Docs, JavaScript, 2023. [Online]. Available: https://developer.mozilla.org/en-US/docs/Web/JavaScript
  29. W3C, Web Content Accessibility Guidelines (WCAG) 2.1, 2018. [Online]. Available: https://www.w3.org/TR/WCAG21/
  30. R. Fielding et al., Hypertext Transfer Protocol -- HTTP/1.1, IETF RFC 2616, 1999.
  31. OWASP Foundation, OWASP Top Ten, 2023. [Online]. Available: https://owasp.org/www-project-top-ten/
  32. D. Stuttard and M. Pinto, The web application hacker's handbook: Finding and exploiting security flaws, 2nd ed. Wiley, 2011.
  33. M. Zalewski, The tangled Web: A guide to securing modern web applications. No Starch Press, 2011.
  34. G. J. Myers, C. Sandler, and T. Badgett, The art of software testing, 3rd ed. Wiley, 2011.
  35. L. Crispin and J. Gregory, Agile testing: A practical guide for testers and agile teams. Addison Wesley, 2009.
  36. L. Copeland, A practitioner's guide to software test design. Artech House, 2004.
  37. L. Floridi and J. Cowls, A unified framework of five principles for AI in society, Harvard Data Sci. Rev., vol. 1, no. 1, 2019.
  38. A. Jobin, M. Ienca, and E. Vayena, The global landscape of AI ethics guidelines, Nature Mach. Intell., vol. 1, no. 9, pp. 389–399, 2019.
  39. D. D. Luxton, Recommendations for the ethical use and design of artificial intelligent care providers, Artif. Intell. Med., vol. 62, no. 1, pp. 1–10, 2014.
  40. N. Bostrom and E. Yudkowsky, The ethics of artificial intelligence, in The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, 2014, pp. 316–334.
  41. K. Schwaber and J. Sutherland, The Scrum guide, 2020. [Online]. Available: https://scrumguides.org/

  42. K. Beck et al., Manifesto for Agile Software Development, 2001. [Online]. Available: https://agilemanifesto.org/
  43. Project Management Institute, A guide to the project management body of knowledge (PMBOK guide), 7th ed. Project Management Institute, 2021.
  44. International Organization for Standardization, ISO/IEC 25010:2011 Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models, 2011.
  45. International Organization for Standardization, ISO 9241-11:2018 Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts, 2018.
  46. Health Level Seven International, HL7 FHIR Release 4, 2023. [Online]. Available: https://hl7.org/fhir/
  47. General Data Protection Regulation, Regulation (EU) 2016/679 of the European Parliament and of the Council, 2016.
  48. Flask Mega Tutorial, 2023. [Online]. Available: https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i- hello-world
  49. Real Python, Python tutorials, 2023. [Online]. Available: https://realpython.com/
  50. Stack Overflow, Developer community, 2023. [Online]. Available: https://stackoverflow.com/
  51. GitHub, Documentation, 2023. [Online]. Available: https://docs.github.com/
  52. DigitalOcean, Community tutorials, 2023. [Online]. Available: https://www.digitalocean.com/community/tutorials