DOI : 10.17577/IJERTV14IS060176
- Open Access
- Authors : E. Sumalatha, Manoj Yadav
- Paper ID : IJERTV14IS060176
- Volume & Issue : Volume 14, Issue 06 (June 2025)
- Published (First Online): 01-07-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
A Blockchain-Enabled Web Application for Laboratory Asset Monitoring and Sentiment-Based Feedback Analysis
E. Sumalatha
M.Tech, Computer Science and Engineering
Shri Venkateshwara University, Gajraula, Uttar Pradesh 244236
Manoj Yadav
Computer Science and Engineering
Shri Venkateshwara University, Gajraula, Uttar Pradesh 244236
Abstract- Healthcare systems rely heavily on the timely and secure availability of medical and laboratory supplies. Building upon our previously published work, which automated the procurement and tracking of laboratory items with face authentication using CNN-based models (VGG16 and MobileNet), this extension introduces advanced NLP and blockchain technologies to further enhance system intelligence and data security. In this phase, product reviews and user feedback related to laboratory items are collected and pre-processed in preparation for sentiment analysis. We employ sophisticated transformer-based models, namely BERT, RoBERTa, and a hybrid GPT-2+CNN architecture, to classify the reviews into two groups, positive and negative, enabling rapid understanding without manual reading. Additionally, all laboratory item data is securely stored on a private Ethereum blockchain using Ganache and Solidity smart contracts, ensuring immutability and traceability. We integrate the Gemini API to allow users to generate queries via natural language commands, making the system highly interactive and user-friendly. The final outcome is a secure, smart laboratory management system capable of real-time sentiment analysis and tamper-proof data handling, contributing to more efficient and secure supply chains in the medical care realm.
Keywords: Supply chain, Sentiment analysis, Blockchain, Face recognition, BERT, GPT-2, RoBERTa, Ethereum, CNN, Gemini API, Secure procurement.
I. INTRODUCTION
Blockchain technology has received widespread acclaim for its capacity to drive revolution and creativity in present-day business models and structures, and it is widely praised for its modular and collaborative nature [1]. The introduction of this technology has had a significant influence on the commercial and technological sectors. In the past few years, prominent corporations like IBM [2] have initiated attempts to develop stronger, more dependable, and cost-effective blockchain solutions. With advantages like programmability, scalability, improved data structures for blocks and operations, and innovative consensus mechanisms, blockchain is increasingly popular in practical use today. As a result, academia and industry are increasingly interested in applying this kind of technology to the administrative realm and its procedures. In reality, there are numerous situations in which it can be used, ranging from logistical and shipping processes [3] to investigations of its utilization in medical care [4], food supply chains and agriculture [5], and banking and finance services [6].
Blockchain technology in the medical industry is used for managing supply chains, administration of information about patients, clinical investigations and data safety, invoicing, medicine traceability, claims determination, and other purposes. The administration of supply chains is seen as a significant usage of blockchain tech for medical purposes, according to [7].
Healthcare businesses confront several issues as a result of increasing expectations, including consumer discontent, growing healthcare expenses, competition, and decreasing compensation for services. Every such circumstance pushes medical firms to build a paradigm capable of meeting these needs while also efficiently dealing with constant change, developments in technology, rising healthcare expenses, and intense rivalry, and assuring that customers are satisfied. Healthcare firms have begun to turn toward supply chain administration to minimize costs and achieve objectives. It refers to the movement of items, data, and cash between and among vendors within the demand chain in order to fulfill customer needs in the most effective manner feasible. Nevertheless, the management of supply chains for healthcare organizations presents unique issues attributable to the increased complexity and risk involved, since a disrupted supply chain can imperil the safety of patients in addition to enabling the manipulation or breach of their personal health data [7].
Sentiment analysis has made great progress in the recent few decades, paving the way for the incorporation of deep learning models, notably transformer-oriented frameworks like Bidirectional Encoder Representations from Transformers (BERT), GPT, etc. [8]. These models demonstrated outstanding performance in different NLP domains, like sentiment analysis, by using enormous-scale initial training on heterogeneous linguistic information and fine-tuning on task-specific data [9]. A significant advance in this field is the release of the Robustly Optimized BERT Pretraining Approach (RoBERTa), which expands upon BERT and enhances its performance by altering critical hyperparameters and training methods and by employing a bigger dataset for the initial training phase [10]. A further significant advancement is the invention of the Efficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA) [11] paradigm. It reveals increased effectiveness and accuracy in comparison with BERT by adopting a stronger preliminary training assignment termed replaced-token detection. These deep learning approaches have been found useful in sentiment analysis tasks, including categorizing thoughts expressed through online forums, reviews, and other textual streams.
Named Entity Recognition (NER) [12] and sentiment analysis [12], two renowned natural language processing methods, have also been utilized for interpreting various kinds of reviews. Nevertheless, recent developments in large language models (LLMs), including BERT and GPT, have demonstrated positive outcomes in numerous NLP tasks, notably forecasting, sentiment analysis, and text categorization [13-15]. These models, notably GPT, have shown outstanding ability to grasp and generate human-resembling language, which renders them well suited for understanding reviews.
The combination of GPT and Convolutional Neural Network (CNN) frameworks offers a fresh way to address the issues connected with review categorization: CNNs specialize in feature extraction and the identification of abnormalities.
Automating laboratory procedures is establishing itself as an important development in the medical field, totally altering the manner in which diagnostics tests and analysis are performed [16, 17]. The delivery of effective healthcare services is intrinsically dependent on the timely availability and secure management of medical and laboratory supplies. These supplies play a crucial role in diagnostic, therapeutic, and research activities across healthcare institutions. However, the traditional management of such supplies is often marred by manual tracking, inefficient procurement procedures, and vulnerabilities in data integrity and security. These challenges can lead to delays, stock-outs, increased operational costs, and potential compromises in patient care.
To address some of these inefficiencies, our previously published work introduced an automated system for the procurement and tracking of laboratory items, leveraging deep learning-based face recognition models such as VGG16, MobileNet, and CNNs. This approach ensured secure login access through live face authentication, thereby reducing unauthorized access and streamlining internal communication among departments.
As an extension to the previous research endeavors, we aim to develop a robust, enhanced system that incorporates both blockchain technology and Natural Language Processing (NLP) methodologies. Specifically, we collect reviews of products used by a diverse range of users from several online sources and subsequently pre-process the collected data to permit sentiment classification. We deploy transformer-based models, namely BERT, RoBERTa, and a hybrid GPT-2 combined with CNN, to automate the analysis and subsequent classification of the reviews into two groups, positive and negative. This classification paves the way for faster decision-making and, furthermore, eliminates the necessity for direct, manual perusal of larger text corpora.
To ensure tamper-proof data storage and decentralized accessibility, all laboratory item data is recorded on a private Ethereum blockchain using Solidity smart contracts and Ganache as the local development environment. Furthermore, the system's user interface is augmented by integrating the Gemini API, which enables users to generate and execute natural language queries. This addition significantly enhances user interaction by providing a conversational interface for querying system data.
Overall, the proposed system advances healthcare supply chain automation by fusing secure access control, intelligent feedback analysis, and blockchain-based data integrity. It not only facilitates operational transparency but also introduces intelligent, real-time insight into user sentiment and supply chain conditions. This makes the solution highly suitable for modern healthcare environments striving toward efficiency, security, and data- driven decision-making.
II. LITERATURE SURVEY
The enhancement of laboratory management through the integration of advanced technologies has now gained prime attention. Through automation within clinical laboratories, improvements in efficiency, accuracy, and the protection of the concerned medical devices become possible. Works published previously in the literature reveal that laboratory automation can reduce manual mistakes, speed up the processing of specimens, and raise the pace of assessment, which shapes patient care and makes diagnostics accurate [16].
Blockchain technology has been a powerful solution when it comes to the development of secure yet transparent inventory management. Past investigations reveal that blockchain-centric systems can provide rigid, immutable data storage, thereby ensuring traceability and reducing discrepancies in supply chains. As a matter of fact, warehouse management with better security and transparency is viable with such blockchain-based inventory paradigms [1].
Sentiment analysis, a subfield of natural language processing, is being deployed for the extraction of actionable insights from textual information [18]. Informed decision-making becomes possible because sentiment analysis helps to comprehend the opinions of stakeholders in the context of supply chain management. Previous research attempts have investigated the deployment of sentiment analysis to enhance supply chain intelligence, focusing on the critical part played by even subtle differences in stakeholder sentiments.
The synergy of automation, blockchain, and sentiment analysis can provide a comprehensive approach that modernizes the tracing of laboratory items in a healthcare setting. Through the automation of diverse processes within the laboratory, it becomes possible to ensure data integrity via the blockchain and obtain useful information from reviews using sentiment analysis. Thereby, the concerned organizations benefit from enhanced operational efficiency and data-based decision-making. Now, let us look at the related literature below.
[19] investigated how the healthcare sector will be affected by the automation of supply chain procedures, namely procurement. The case study demonstrates how automation improves data quality, streamlines procedures, and lowers manual mistakes to dramatically increase procurement efficiency. Purchase orders may be processed more quickly thanks to automated systems, which also improve supply chain visibility and facilitate real-time departmental collaboration. The results showed that automation decreases lead time, enhances policy compliance, and makes it easier to track the procurement phases, all of which contribute to the prompt supply of healthcare products. In order to simplify purchase order administration, [20] concentrated on creating a web-dependent procurement structure. The solution reduced processing time and minimized data-input mistakes by automating the whole procurement process, from submitting requests to placing the final purchase order. They enhanced two important aspects, namely security and accountability, with role-based access constraints. They also improved other critical aspects such as transparency, communication, and decision-making, backed by integral alerting mechanisms guaranteeing that administrators receive frequent updates on the concerned aspects.
[21] discussed the smoother operation of product retrievals (taking place in warehouse set-ups) that was made possible by improving workflow automation with integrated RFID technology. As the RFID technology was integrated into their system, it helped improve handling and, subsequently, minimize mistakes often made by humans. This permitted the efficient tracing and identification of all products that were retrieved. The accomplished automation was able to improve the accessibility of inventory, so that inconsistencies could be handled at a faster pace thanks to practical, immediate updates on goods that had arrived. The net outcomes of their work were substantial cost reductions and improved operational performance.
[22] paid special attention to autonomous procurement processes, which can cut down on mistakes committed by human beings and enhance the efficacy of medical device purchasing. With such automation achieved in their system, they were able to smooth workflows, enable quick approval of purchasing orders, and enhance the overall cooperation of teams engaged in procurement. As accountability was raised through these technological prospects, they were able to practically trace procurement endeavors and subsequently report them in an accurate fashion. Ultimately, reduced process-centric delays and lowered labor costs assured the right-time availability of medical devices.
[23] had the objective of controlling pharmaceutical inventory with the deployment of an Agile-based paradigm towards the enhancement of clinical performance within the neuroscience realm. Inefficiency in managing healthcare equipment inventory was acknowledged as a problem that might lead to supply shortages, administrative errors, and patient treatment interruptions. An Agile approach to software development was employed, starting with research into customer requirements and moving on to app development and user acceptance testing. Superior management of inventory operations, healthcare buying efficacy, stock-administration error reduction, and user satisfaction with the system were all used in this study to show how well the system increased clinical effectiveness. Along with practical implications for efficient neurological clinical administration, this study contributes conceptually to the creation of stock administration strategy in the medical sector.
By anticipating laboratory needs and requests, [24] described the state of the art in pathology lab automation to spur technological development and evolution, with the goal of motivating new tools and processes that positively transform the work of operators, organizations, and patients.
[25] suggested that the intelligent pairing of physical items with their corresponding digital twins might significantly improve customized manufacturing processes, at least for value chains that are socially pushing us to develop new markets, such as assistive technology. It ensured custom design, a quick production run, and regular follow-up maintenance that further enhanced user results, in addition to a much-needed responsive yet sustainable manufacturing process.
[26] examined several aspects of drug distribution, such as the regulatory setting, the development of smart hospitals, the goods provided by IT firms, and the examination of other research materials. The study examined a number of medication management topics, including the requirement for IT assistance, prescription planning and procurement, and customized recordkeeping. They then applied a constructional technique followed by modelling. Finally, a business operational paradigm based on information technology was developed that can be utilized to create the full medicine administration process.
[27] examined how AI may be used in healthcare, focusing on how it can improve the patient experience and save expenses. Predictive analytics was highlighted in the research, which employed AI systems to investigate patients' past data, identify trends, and forecast health-centric results. This enabled the implementation of early treatments, which can reduce hospitalizations and associated costs. Robotic Process Automation (RPA) powered by AI reduces administrative expenses by automating tasks like invoicing and the development and handling of claims. This lowers the requirement for physical exertion and gets rid of mistakes.
[28] explained the classification of medical devices and evaluated the ways in which ChatGPT may support various aspects of medical device design, optimization, and improvement. However, it is crucial to consider limitations including the potential for misinterpreting user intent, the lack of personal experience, and the need for human supervision. Reaching a balance between ChatGPT and human expertise can result in medical devices that are safe, high-quality, and compliant. The study promoted ChatGPT in the medical device production industry while also emphasizing the advantageous relationship between artificial intelligence and human involvement in healthcare.
III. CRITICAL COMPONENTS OF BLOCKCHAIN-ENABLED WEB APPLICATION
Here, we discuss the critical components of the blockchain-enabled web application via the following three sections:
Sentiment Classification Using NLP Models
The textual data that has been subjected to pre-processing is fed into advanced transformer-based models for sentiment classification. We utilize three models, namely BERT, RoBERTa, and GPT-2 combined with CNN, the CNN serving localized feature extraction. BERT and RoBERTa are built on an attention mechanism that considers both left and right context in a sentence, which significantly enhances the understanding of sentiment-centric differences. RoBERTa performs better across datasets containing high linguistic variation because it is an optimized version of BERT. Likewise, GPT-2 is deployed owing to its robust language-generation capabilities, and when integrated with a CNN it can capture both global context and local phrase patterns in text classification tasks. Our system predicts the sentiment class of every review as one of two groups, positive or negative. The resulting classifications are then stored with appropriate metadata, providing actionable insights for inventory managers and product teams so that they can assess trends in user satisfaction and identify issues with specific lab items.
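The final decision step described above can be sketched as follows. This is a minimal illustration, not our trained models: the logits and the review record are hypothetical stand-ins for the output of any of the three classifiers (BERT, RoBERTa, or GPT-2+CNN) and the metadata we attach before storage.

```python
import math

def classify_sentiment(logits):
    """Map a model's two raw scores (negative, positive) to a
    sentiment label plus a confidence via softmax."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    probs = [v / total for v in exps]
    label = "positive" if probs[1] >= probs[0] else "negative"
    return {"label": label, "confidence": max(probs)}

# Illustrative record for one lab-item review; field names are hypothetical.
review = {"item": "centrifuge C-200", "text": "Rotor vibrates badly after a week."}
result = classify_sentiment([2.1, -0.4])   # hypothetical model logits
review.update(result)
```

The stored `label`/`confidence` fields are what the inventory managers' trend reports would aggregate over.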
Blockchain-Based Data Storage
To address issues of data integrity, transparency, and traceability, our system incorporates blockchain technology on the Ethereum platform. We implemented our blockchain layer using Ganache, a local in-memory blockchain used for testing and development, with smart contracts written in the Solidity programming language. The implemented smart contract hosts numerous records for every lab item, such as procurement details, transaction history, and review metadata, which are stored on our blockchain. Our approach ensures that no recorded transaction or data entry can be altered or deleted, thereby offering a rigid solution to the medical realm. More secure tracing and verification of data are made possible by generating a cryptographic hash that uniquely identifies each block during the execution of every contract. Our decentralized structure eliminates the risks posed by single points of failure that are common in centralized databases, and it also enables multiple stakeholders (procurement, quality control, administration) to access verifiable records without compromising data confidentiality. With the assurance of tamper-resistance and non-repudiation, we address security, compliance, and trust issues while managing critical laboratory assets.
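The tamper-evidence property described above rests on hash chaining: each block's hash covers both its record and the previous block's hash, so altering any stored entry invalidates every later hash. The sketch below illustrates only that property in plain Python (it is not our Solidity contract; the record fields are hypothetical):

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a lab-item record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64          # genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "hash": h, "prev": prev})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain([
    {"item": "microscope M-12", "event": "procured", "qty": 2},
    {"item": "microscope M-12", "event": "issued", "qty": 1},
])
```

Editing `chain[0]["record"]` after the fact makes `verify(chain)` return `False`, which is exactly the detection guarantee the blockchain layer provides (Ethereum additionally distributes the chain so no single party can silently rewrite it).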
Query Interaction via Gemini API
In this phase, we integrate the Gemini API to add intelligent user interaction capabilities to the system. Traditional systems frequently demand knowledge of a structured query language or predefined filters for the retrieval of specific data, which limits non-technical users. The integration of the Gemini API removes this kind of constraint by empowering natural language querying. Users can input free-form queries such as "Show all negative feedback for chemical reagents this month" or "How many items were flagged as defective last week?", and the system dynamically interprets the context, maps it to backend data models, and returns accurate results. The API acts as a natural language interface (NLI) between the user and the database, translating human-like commands into machine-executable instructions. This enhances accessibility for users across departments (be it inventory, quality control, or management) by removing the technical barrier to data retrieval. Furthermore, it facilitates real-time decision-making and fosters a more intuitive interaction with system features, ultimately improving productivity and user experience.
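An NLI of this kind ultimately resolves a free-form request to a structured filter over the backend data. The keyword router below is a deliberately simplified stand-in for the Gemini API call (the filter field names and vocabulary are hypothetical), showing only the query-to-filter mapping step:

```python
def parse_query(text):
    """Very rough NL-to-filter translation (a stand-in for the Gemini API,
    which would handle far richer phrasing)."""
    text = text.lower()
    flt = {}
    if "negative" in text:
        flt["sentiment"] = "negative"
    elif "positive" in text:
        flt["sentiment"] = "positive"
    if "defective" in text:
        flt["status"] = "defective"
    for period in ("this month", "last week"):
        if period in text:
            flt["period"] = period
    return flt

example = parse_query("Show all negative feedback for chemical reagents this month")
```

The resulting dictionary (`{"sentiment": "negative", "period": "this month"}`) is what the backend would translate into a database query; a production system would delegate the parsing itself to the LLM.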
IV. PROPOSED METHODOLOGY
In our study, we carry out sentiment investigation of product review data using advanced deep learning models, specifically BERT, RoBERTa, and a hybrid GPT-2 + CNN architecture. The process consists of several steps, namely data collection, pre-processing, and model training, followed by assessment and comparison. The detailed methodology is presented below.
Dataset
We have considered dataset [29], taken from GitHub, which deals with Amazon reviews from 2023. It contains useful information permitting sentiment analysis and blockchain-backed laboratory asset monitoring. Given its academic-style domain, it is a prospective fit for our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis.
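A minimal loading-and-filtering pass over such a review file might look like the following. The CSV layout (`text,label` columns) and the sample rows are assumptions for illustration only, not the exact schema of dataset [29]:

```python
import csv
import io

# A tiny in-memory stand-in for the downloaded review file.
RAW = '''text,label
"Great autoclave, heats evenly",positive
"Pipette tips crack easily",negative
"Arrived on time",neutral
'''

def load_reviews(fh):
    """Read (text, label) pairs, keeping only rows with a known label."""
    return [(row["text"], row["label"]) for row in csv.DictReader(fh)
            if row["label"] in {"positive", "negative", "neutral"}]

reviews = load_reviews(io.StringIO(RAW))
```

In the real pipeline the same pass feeds the pre-processing stage before the transformer models are trained.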
Novelty Summary of the Work
In this project, we introduce a hybrid sentiment analysis framework designed specifically for lab item feedback (positive/negative classification). Unlike conventional transformer-based or CNN-only models, our approach integrates the contextual richness of GPT-2 embeddings with the computational efficiency of MobileNet CNN layers for classification. The key innovations include:
- GPT-2 as a Feature Extractor Only: Instead of using GPT-2 for generation or full fine-tuning, we utilize its final hidden states to capture deep semantic representations of feedback text, improving generalization while reducing compute load.
- CNN-based Lightweight Classifier (MobileNet): We reshape the token embeddings into 2D spatial formats, allowing MobileNet to exploit local feature patterns within textual structures using depthwise separable convolutions. This results in a fast and memory-efficient classification pipeline.
- Hybrid Integration Pipeline: The pipeline effectively decouples feature learning and classification, allowing for modular upgrades, e.g., switching GPT-2 with RoBERTa or changing the CNN architecture with minimal retraining.
- Domain-Specific Fine-Tuning: The system is evaluated specifically on lab item feedback, making it applicable to educational, scientific, and R&D domains where sentiment analysis models are often underdeveloped.
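The reshaping step named above, rearranging token embeddings into a 2D spatial map the CNN can convolve over, can be sketched in pure Python. The toy dimensions are illustrative; in practice the channel size would be GPT-2's hidden size (768):

```python
def reshape_to_2d(hidden_states, height, width):
    """Rearrange (n_tokens x hidden) embeddings into a
    (height x width x channels) map for a CNN, padding with zero vectors."""
    n, channels = len(hidden_states), len(hidden_states[0])
    zero = [0.0] * channels
    padded = hidden_states + [zero] * (height * width - n)
    return [[padded[r * width + c] for c in range(width)]
            for r in range(height)]

# 6 tokens with a toy hidden size of 4, mapped onto a 3x4 grid.
tokens = [[float(t)] * 4 for t in range(6)]
grid = reshape_to_2d(tokens, height=3, width=4)
```

Each grid cell keeps the full embedding as its channel vector, so depthwise separable convolutions can then pick up local patterns across neighbouring tokens.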
Overview of our sentiment analysis
Here, we first present the block diagram denoting our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis in Figure 1 below:
Figure 1 Block Diagram of our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis
The diagram depicts an advanced laboratory management platform that integrates blockchain and natural language processing. The architecture features a Web Application Layer for user interactions at the top, which connects to two specialized components: a Feedback Sentiment Module using advanced models (BERT, RoBERTa, and GPT2+CNN) to analyze user feedback, and a Blockchain Storage Module employing Ganache and Solidity Smart Contracts to securely store laboratory asset information. Between these components and the Central Database at the
bottom, we have incorporated a Gemini API Integration Module, providing natural language interfaces that allow users to interact conversationally with both the sentiment analysis and blockchain storage systems. This comprehensive architecture creates a secure, intelligent ecosystem where laboratory assets are tracked with immutable blockchain records while user feedback is continuously analyzed to improve operations.
Approaches used in our Methodology
Here, we briefly describe the three approaches that we have used in our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis.
BERT
In our feedback sentiment module, we have designated BERT as the foundation model. It processes the reviews from the considered dataset (which hosts the data of several laboratory users). BERT's bidirectional training approach allows context understanding to take place across both directions simultaneously. Thereby, the model is effective at capturing the refined language within laboratory reviews, which in turn determines the sentiment polarity of equipment and process assessments.
BERT is based on the Transformer encoder. The two main components of each encoder layer are:
- Multi-head Self-Attention
- Feed-Forward Neural Network

The equation used for the input representation is shown below. For a given input sentence: [CLS] the product is great [SEP], each token is represented as:

Input Embedding = Token Embedding + Segment Embedding + Position Embedding

Let:
- T: token embeddings.
- S: segment embeddings.
- P: positional embeddings.

Then:

X = T + S + P, with X ∈ R^(n×d)

Where:
- n: number of tokens.
- d: hidden size (e.g., 768 in BERT-base).
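The input representation above is a plain elementwise sum; a toy instance with tiny vectors in place of BERT's 768-dimensional embeddings:

```python
def input_embedding(token_emb, segment_emb, position_emb):
    """X = T + S + P, summed elementwise for every token."""
    return [[t + s + p for t, s, p in zip(te, se, pe)]
            for te, se, pe in zip(token_emb, segment_emb, position_emb)]

T = [[0.5, 0.1], [0.2, 0.9]]   # two tokens, toy hidden size d = 2
S = [[0.0, 0.0], [0.0, 0.0]]   # single-segment input (all segment A)
P = [[0.1, 0.2], [0.3, 0.4]]   # positions 0 and 1
X = input_embedding(T, S, P)
```

In BERT itself all three tables are learned; only the summation is fixed.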
The equation used for Multi-Head Self-Attention is shown below. BERT uses multi-head attention:

Attention(Q, K, V) = Softmax(QK^T / √d_k) V

For head i:

head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

and the head outputs are concatenated and projected:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O

Where:
- Q, K, V: query, key, and value matrices.
- d_k: dimension of the key vectors.
- W_i^Q, W_i^K, W_i^V: learned projection matrices for head i.
- h: number of heads.

The equation used for the Feed-Forward Layer (FFN) is shown below. After attention, each token's representation is passed through an FFN:

FFN(x) = max(0, xW1 + b1)W2 + b2

Where:
- W1, W2: weight matrices (with biases b1, b2).
- d_ff = 3072: inner dimension in BERT-base.

The equation used for Residual & Layer Norm is shown below. Each sub-layer (attention or FFN) uses residual connections and layer normalization:

Output = LayerNorm(x + Sublayer(x))

The equation used for Fine-Tuning for Classification is shown below. The [CLS] token output h[CLS] is passed through a classifier:

ŷ = Softmax(W h[CLS] + b)

Cross-entropy loss:

L = − Σ_i y_i log(ŷ_i)

RoBERTa

With the deployment of RoBERTa on top of the sentiment analysis capabilities served by the BERT architecture (backed with optimized training methods), we enhance the overall sentiment analysis prospects. Within this feedback sentiment module, RoBERTa processes the textual laboratory reviews owing to its powerful language comprehension abilities. This paves the way for sentiment classification with higher accuracy, given its dynamic masking patterns and training on larger datasets. Thereby, more reliable insights about equipment performance are extracted from the user reviews.

When it comes to the core architecture, RoBERTa retains the original Transformer encoder structure from BERT, which includes:
- Multi-Head Self-Attention
- Feed-Forward Layers
- Residual Connections
- Layer Normalization

So, the key mathematical formulas are the same as in BERT.
Now, we look at the improvements over BERT (training-side changes). Even though the model equations remain the same, RoBERTa changes how BERT is trained:

Dynamic Masking Instead of Static Masking

BERT masks 15% of tokens once and keeps them fixed across epochs. RoBERTa dynamically re-samples the masking during every training step. Mathematically, at each batch t a fresh masked input is drawn:

x̃^(t) = Mask^(t)(x)

This avoids the model overfitting to specific masked positions.

Removed Next Sentence Prediction (NSP)

BERT uses an NSP loss in addition to masked language modelling:

L_BERT = L_MLM + L_NSP

RoBERTa drops NSP, relying only on Masked Language Modeling (MLM):

L_RoBERTa = L_MLM = − Σ_(i∈M) log P(x_i | x_\M)

where M is the set of masked positions.

Bigger Batch Sizes and More Data

- BERT: trained on ~16GB of text.
- RoBERTa: trained on ~160GB (including CommonCrawl News and OpenWebText).

This boosts generalization and helps reduce overfitting.

Longer Training & Larger Sequences

RoBERTa trains with sequences up to 512 tokens from the beginning, instead of truncated 128-length inputs, and its training time is roughly 5x longer than BERT's.

Now, we look at the RoBERTa-base configuration in Table 1 below.

Table 1 RoBERTa-base Config

Parameter          Value
Hidden Size        768
Layers             12
Attention Heads    12
Max Length         512
Total Parameters   ~125M

The Transformer equations recap (same as BERT) is as follows:

Self-attention:
Attention(Q, K, V) = Softmax(QK^T / √d_k) V

Multi-head:
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
MultiHead(x) = Concat(head_1, ..., head_h) W^O

Feed-forward:
FFN(x) = max(0, xW1 + b1)W2 + b2

Layer output:
LayerNorm(x + Sublayer(x))

Fine-tuning for classification: for binary classification tasks (e.g., sentiment detection), the final [CLS] token representation h[CLS] is passed through a softmax classifier, trained with cross-entropy loss.
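Dynamic masking, re-drawing the masked positions at every step instead of fixing them once, can be illustrated in a few lines of Python. The 15% rate follows BERT's convention; the token list is an arbitrary example:

```python
import random

MASK = "[MASK]"

def dynamic_mask(tokens, rate=0.15, rng=random):
    """Return a fresh masked copy of `tokens`; called anew at every
    training step, so the masked positions vary across steps."""
    out = list(tokens)
    k = max(1, round(rate * len(tokens)))
    for i in rng.sample(range(len(tokens)), k):
        out[i] = MASK
    return out

rng = random.Random(0)
sent = ["the", "product", "is", "great", "and", "sturdy"]
step1 = dynamic_mask(sent, rng=rng)
step2 = dynamic_mask(sent, rng=rng)   # usually masks different positions
```

Static masking would call this once and reuse the result for every epoch; RoBERTa's gain comes precisely from the re-sampling at each step.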
-
Hybrid GPT-2+CNN
In this hybrid model GPT2+CNN, we combine the prospects yielded from two different architectures to perform the sentiment investigation. Out of the two models combined in this hybrid model, GPT-2 is responsible for the powerful autoregressive language comprehension, whereas the CNN is responsible for the extraction of critical local features from textual info within the dataset. The hybrid
= (, , )
model GPT2+CNN in our laboratory tracing system can
perform well towards the identification of specific
() = (1, . )
Feed Forward:
() = (1 + 1)2 + 2
Layer Output:
( + ())
The Fine-Tuning for Classification is given by:
For binary classification tasks (eg. Sentiment detection), the final [CLS] token is used:
[]sentiment patterns in technical laboratory realm with the retention of contextual awareness.
GPT-2 (Feature Extractor): the mathematical view is given below.
GPT-2 is a causal Transformer decoder, but in our use case we only extract token-level or sequence-level embeddings; no decoding or next-token prediction is performed.
Tokenization and embedding:
Let a tokenized input be:
x = [x_1, x_2, ..., x_n]
Each token is converted into:
- A token embedding: E_t(x_i)
- A positional encoding: E_p(i)
h_0 = E_t(x) + E_p (the initial input to the transformer)
These embeddings are then passed through the transformer decoder blocks (with no attention to future tokens):
h_{l+1} = TransformerBlock(h_l)
Finally, we extract the last hidden layer output:
H = [h_1, h_2, ..., h_n]
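The embedding and pooling steps above can be illustrated with a toy NumPy sketch. The dimensions are illustrative only (not GPT-2's real vocabulary, context, or hidden sizes), and the decoder blocks themselves are omitted; in the actual system the Hugging Face transformers library supplies these operations.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, max_len, hidden = 100, 16, 8     # toy sizes, not GPT-2's real dimensions

E_t = rng.normal(size=(vocab_size, hidden))  # token embedding table
E_p = rng.normal(size=(max_len, hidden))     # learned positional encodings

x = np.array([5, 17, 42, 7])                 # a toy tokenized input [x_1 .. x_n]
h0 = E_t[x] + E_p[:len(x)]                   # h_0 = E_t(x) + E_p

# The decoder blocks would map h0 to H; here we pool h0 directly to show
# how a CLS-like fixed-length embedding is formed (mean pool over tokens):
pooled = h0.mean(axis=0)
print(h0.shape, pooled.shape)  # (4, 8) (8,)
```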
We may also use:
- A CLS-like embedding (e.g., a mean/max pool over tokens)
- A fixed-length representation (e.g., a reshape to 2D)

CNN Classifier: the mathematical view is given below.
The features from GPT-2 are passed to a lightweight CNN classifier.
Input preparation: let GPT-2's output be reshaped to a 2D format X ∈ R^(H×W×C), where:
- H, W: height and width obtained from reshaping the tokens
- C: channels = the GPT-2 hidden size
Convolution:
Y_{i,j} = Σ_m Σ_n X_{i+m, j+n} · K_{m,n}
Classification layer: after several CNN blocks, Global Average Pooling and a dense softmax layer are applied:
y = softmax(W · GAP(F) + b)
The loss function is given as follows. Since this is a binary classification task (e.g., sentiment), we use Binary Cross-Entropy:
L = -[y · log(ŷ) + (1 - y) · log(1 - ŷ)]

Important Methodology Phases
Our proposed system extends the traditional laboratory item tracking set-up with the integration of sophisticated technologies, namely natural language processing models and blockchain. This extension promises more secure handling of data. In our built system, product reviews and feedback associated with the procured lab items are gathered, pre-processed, and analyzed for sentiment with the deployment of three advanced models, namely BERT, RoBERTa, and a hybrid GPT-2 combined with CNN. By using these models, sentiment classification is automated without manual intervention, categorizing the reviews into two groups, namely positive and negative. Additionally, to ensure data immutability and traceability, all records of lab item transactions are stored securely on a private Ethereum blockchain using Solidity smart contracts and Ganache as the local development environment. For enhanced user interaction, the system incorporates the Gemini API, enabling users to generate dynamic queries using natural language, thereby making the interface more accessible to non-technical users. Overall, the proposed system delivers a robust, intelligent, and secure solution that automates sentiment interpretation, guarantees tamper-proof data storage, and enables transparent, user-friendly access to critical information, thus optimizing laboratory inventory management within medical equipment companies.

Now, we break down the important phases in our proposed methodology below.

- Data Collection
Product reviews were collected from publicly available online platforms and repositories. The considered dataset [29] hosts the text reviews with the applicable sentiment labels; in our case, the sentiment labels are negative, positive, or neutral.

Figure 2 Pictorial Representation of Word Cloud of Reviews from the considered dataset
- Text Preprocessing Using NLP
NLP techniques were employed to clean and standardize the review text. The preprocessing steps are as follows:
- Lowercasing all text
- Eliminating punctuation, stopwords, and special characters
- Lemmatization to reduce words to their base forms
These steps help improve model performance by reducing noise and redundancy in the input data.
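A minimal sketch of these steps using only the Python standard library; the stopword list and the suffix-stripping stand-in for true lemmatization are illustrative, and the actual pipeline would typically rely on a library such as NLTK or spaCy:

```python
import re
import string

STOPWORDS = {"the", "a", "an", "is", "are", "was", "and", "it", "this"}  # illustrative subset

def preprocess(text: str) -> list[str]:
    text = text.lower()                                             # lowercase all text
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)  # drop punctuation/special chars
    tokens = [t for t in text.split() if t not in STOPWORDS]        # drop stopwords
    # crude stand-in for lemmatization: strip a trailing "s"
    return [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]

print(preprocess("The reagents are working well!"))  # ['reagent', 'working', 'well']
```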
- Tokenization
Each model requires a specific tokenization format. The Hugging Face tokenizer library was deployed to tokenize the data according to the respective model architecture:
- BERT tokenizer
- RoBERTa tokenizer
- GPT-2 tokenizer (with sequence input adaptation for CNN integration)
Tokenization converts the raw text into numeric input forms suitable for model training.
- Train-Test Split
The processed dataset was partitioned into training and testing sets using an 80/20 ratio. This ensures that model performance is assessed on unseen data, maintaining the validity of the results.
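The 80/20 partition can be sketched with a simple seeded shuffle-and-cut; in practice scikit-learn's train_test_split helper with a fixed random seed serves the same purpose:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    items = list(data)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    cut = int(len(items) * (1 - test_ratio))
    return items[:cut], items[cut:]      # 80% train, 20% test

reviews = [f"review_{i}" for i in range(100)]
train, test = train_test_split(reviews)
print(len(train), len(test))  # 80 20
```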
- Model Training
Each model was trained on the training dataset:
- BERT and RoBERTa were fine-tuned for classification using their respective transformer-based architectures.
- GPT-2+CNN involved feature extraction using GPT-2 embeddings, followed by a CNN for classification.
Training was carried out with appropriate hyperparameters, batch sizes, and learning rates optimized through experimentation.
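The training step can be illustrated with a toy gradient-descent loop that minimizes the binary cross-entropy objective used for sentiment classification. A linear layer over random "embedding" features stands in for the actual fine-tuned transformer heads; all dimensions and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # toy "embedding" features
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)        # toy binary sentiment labels

w, b, lr = np.zeros(16), 0.0, 0.1
for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))                             # sigmoid prediction
    bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = p - y                          # dL/dz for sigmoid + binary cross-entropy
    w -= lr * X.T @ grad / len(y)         # gradient-descent update
    b -= lr * grad.mean()

acc = np.mean((p > 0.5) == y)
print(f"final BCE={bce:.3f}, train accuracy={acc:.2f}")
```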
- Model Evaluation
Models were assessed on the testing dataset using common performance measures:
- Precision
- Accuracy
- F1-Score
- Recall
Confusion matrices and classification reports were created to provide further insight into model predictions.
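These measures can be computed directly from a confusion matrix. The snippet below does so for BERT's reported matrix (TN=135, FP=37, FN=14, TP=814, from Figure 9) and reproduces the positive-class values listed later in Table 2:

```python
def metrics(tn, fp, fn, tp):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)            # positive-class precision
    recall = tp / (tp + fn)               # positive-class recall
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tn=135, fp=37, fn=14, tp=814)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# accuracy=0.95 precision=0.96 recall=0.98 f1=0.97
```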
- Results Visualization
The comparative performance of the models was visualized through graphs and plots. These included bar charts for accuracy and F1-score, and ROC curves where applicable. Visualization helped highlight strengths and weaknesses across models.
- Model Saving
Trained models were serialized and saved using the Hugging Face transformers and torch libraries. This enables future use and deployment without retraining.
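For the transformer checkpoints themselves, the Hugging Face save_pretrained/from_pretrained pair would normally persist weights and config, while torch.save or pickle handles auxiliary objects such as label mappings. The stdlib pickle round trip below sketches the latter pattern (the label_map object is an illustrative example):

```python
import os
import pickle
import tempfile

label_map = {0: "negative", 1: "positive"}   # auxiliary object saved alongside the model

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "label_map.pkl")
    with open(path, "wb") as f:
        pickle.dump(label_map, f)            # serialize for later deployment
    with open(path, "rb") as f:
        restored = pickle.load(f)            # reload without retraining

print(restored == label_map)  # True
```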
Constructed Architectures
- Feedback Sentiment Analysis Architecture for RoBERTa
Figure 3 Feedback sentiment analysis Architecture for RoBERTa of our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis
The architecture diagram illustrates a sophisticated RoBERTa-based sentiment analysis pipeline designed for processing laboratory feedback data within the Blockchain-Enabled Laboratory Asset Monitoring system. Beginning with the input of raw textual data from laboratory users, the pipeline first processes this information through a "Raw Text Data" stage before advancing to comprehensive preprocessing, a critical foundation that branches into four specialized text preparation techniques: text cleaning, lowercasing, adding special tokens and meaningful segments, and dynamic masking. Following preprocessing, the cleaned text undergoes tokenization on one branch while simultaneously flowing through an embedding layer that transforms the linguistic elements into mathematical representations through three distinct embedding types: token embeddings, segment embeddings, and position embeddings. These combined embeddings feed into the core transformer encoder stack, the computational heart of the RoBERTa architecture. The architecture then distributes processing across three major pathways: the multiple layers pathway, the training process pathway, and the output processing pathway. Together, these form a comprehensive end-to-end solution that leverages sophisticated natural language processing approaches to extract meaningful information from user feedback, which can drive improvements in laboratory asset management and operational effectiveness within the blockchain-secured ecosystem.
- Feedback Sentiment Analysis Architecture for GPT-2+CNN
The architectural diagram shows a hybrid sentiment analysis pipeline combining GPT-2 transformers with CNNs to process lab feedback in a blockchain-secured monitoring system. The pipeline begins with text preprocessing (cleaning, lowercasing, adding special tokens), followed by tokenization using byte-pair encoding. Embeddings transform tokens into numerical representations before passing through three specialized GPT-2 transformer layers. The output splits into two parallel paths with different convolutional and pooling operations, which are then flattened, processed through dense layers, and finally merged. A sigmoid function produces binary sentiment classification scores, with system performance measured through accuracy and F1-score metrics.
Figure 4 Feedback sentiment analysis Architecture for GPT-2+CNN of our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis
- Feedback Sentiment Analysis Architecture for BERT
The architecture diagram shows a BERT-based sentiment analysis pipeline for the Blockchain-Enabled Laboratory Asset Monitoring system. The process begins with raw text data moving through preprocessing (cleaning, lowercasing, special token addition, and data masking), followed by WordPiece tokenization. An embedding layer creates multidimensional vectors using token, position, and segment embeddings. The BERT transformer encoder processes data through three pathways: model processing with self-attention mechanisms, training procedures, and output processing. The final stage produces binary sentiment classifications evaluated through standard metrics, creating a feedback loop that enhances laboratory operations while maintaining data security through blockchain infrastructure.
Figure 5 Feedback sentiment analysis Architecture for BERT of our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis
Advantages
- Automated Sentiment Analysis: Utilizes advanced NLP models (BERT, RoBERTa, GPT-2+CNN) to classify user feedback as positive or negative without manual review.
- Immutable and Secure Data Storage: Stores all laboratory item data on a private Ethereum blockchain using smart contracts, ensuring tamper-proof and traceable records.
- Natural Language Query Interface: Integrates the Gemini API to allow users to interact with the system through simple natural language commands, improving accessibility and usability.
- Real-Time Insights and Transparency: Provides instant feedback analysis and transparent data access across departments, reducing delays and improving decision-making.
- Decentralized Data Auditing: The blockchain-backed architecture allows transparent, decentralized auditing of procurement and tracking activities.
IMPLEMENTATIONAL PHASE OF OUR BLOCKCHAIN-ENABLED WEB APPLICATION
- Python-based implementation
The proposed blockchain-enabled web application focuses on sentiment analysis of Amazon product reviews, specifically those within the industrial and scientific category. Sentiment analysis determines the emotional tone underlying a piece of text, which in this case helps to understand customer opinions about the products. The notebook is structured to import the necessary libraries, load and preprocess the dataset, visualize key aspects of the data, implement the sentiment analysis models, and evaluate their performance. This approach enables businesses to gauge client satisfaction and make data-driven decisions. The application first imports several important Python libraries: pandas for data manipulation, numpy for numerical operations, torch and torch.nn for constructing neural networks, and transformers for pre-trained models such as BERT, RoBERTa, and GPT-2. In addition, sklearn (scikit-learn) handles model selection and evaluation, matplotlib handles data visualization, and wordcloud creates visual representations of commonly occurring words. The application also imports os and pickle for operating system interactions and serialization, respectively. This library collection supports the complete sentiment analysis flow, from data loading to final system deployment.
Different fields such as images, rating, text, title, timestamp, user_id, asin, parent_asin, verified_purchase, helpful_vote, and sentiment are included in the dataset. Each entry represents a customer review, where the sentiment column denotes whether the review given by the customer is positive, negative, or neutral. The notebook reports details about the dataset's structure, giving each column's data type and count of non-null entries. The balance among positive, negative, and neutral reviews can be understood by visualizing the sentiment distribution, which is an important phase of the initial data investigation.
A visual word cloud is created from review text to highlight the most common words, offering insights into prevalent themes and topics within the reviews. For sentiment analysis purposes, a data subset is prepared with sentiment labels converted to numerical values. This dataset is divided into training and validation portions to ensure proper model evaluation and robustness.
The implementation includes a custom SentimentDataset class extending torch.utils.data.Dataset, specifically designed for efficient text data processing in sentiment analysis. This class accepts text inputs, their corresponding labels, a tokenizer, and a maximum sequence length parameter. The tokenizer transforms text into model-compatible formats. The class implements the standard Dataset methods: __len__ returns the total item count, while __getitem__ retrieves specific dataset entries. For each text entry, the tokenizer generates input IDs and attention masks, critical components for training transformer models like BERT. Labels are converted to torch tensors. This class design simplifies data preparation, facilitating efficient batch processing during model training.
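Based on that description, the class can be sketched as follows. The exact tokenizer keyword arguments and returned field names are assumptions consistent with the Hugging Face API, not the authors' verbatim code:

```python
import torch
from torch.utils.data import Dataset

class SentimentDataset(Dataset):
    """Wraps review texts and labels for transformer fine-tuning."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.texts, self.labels = texts, labels
        self.tokenizer, self.max_len = tokenizer, max_len

    def __len__(self):
        return len(self.texts)                 # total item count

    def __getitem__(self, idx):
        enc = self.tokenizer(self.texts[idx],
                             max_length=self.max_len,
                             padding="max_length",
                             truncation=True,
                             return_tensors="pt")
        return {"input_ids": enc["input_ids"].squeeze(0),
                "attention_mask": enc["attention_mask"].squeeze(0),
                "label": torch.tensor(self.labels[idx])}
```

Instances of this class can then be fed to a torch DataLoader for efficient batched training.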
- Implementation Modules
Here, we present the implementation modules of our blockchain-enabled web application for laboratory asset monitoring and sentiment-based feedback analysis.
- Indenter Module
- Register and log in to the platform.
- Raise a purchase order and send it to the purchase department.
- Receive email notifications when an order is ready for collection from the stores.
- Collect the order from the stores.
- Fill out the received form for the records.
- Log out after completing tasks.
- Purchase Department Module
- Log in to the platform.
- View and manage incoming purchase requests.
- Accept or reject purchase requests and forward accepted order requests to suppliers.
- Log out after completing tasks.
- Supplier Module
- Log in to the platform.
- View purchase orders forwarded by the purchase department.
- Accept or reject orders.
- Forward accepted orders to the stores.
- Receive email responses from stores on material acceptance or rejection.
- Log out after order processing.
- Stores Module
- View ordered materials upon arrival.
- Inspect the material for quality.
- Send acceptance or rejection responses to the supplier.
- Fill out the Goods Receipt Note (GRN) form for accepted materials.
- Send email notifications to indenters when their orders are ready for collection.
- Fill out the Material Issue Voucher (MIV) form upon order collection by the indenter.
- View the ledger to track delivered and pending quantities.
- Log out after completing tasks.
- Admin Module
- Log in to the platform.
- View the registered indenters.
- Manage lab items and interact with the database directly with user commands.
- Add, view, or remove suppliers.
- View the ledger data.
- View the feedback analysis.
- Log out after completing administrative tasks.
RESULT AND ANALYSIS
A. Ganache GUI
Figure 6: Ganache GUI
Figure 6 shows a screenshot of the Ganache blockchain development tool interface. Ganache is a private Ethereum blockchain used for development and testing purposes. The image shows the Ganache GUI with a local Ethereum blockchain setup, listing multiple pre-funded accounts, each with a 100 ETH balance. It includes account addresses, balances, transaction counts, and HD wallet information.
Figure 7: Blocks tab in Ganache
Figure 7 shows a screenshot of the Blocks tab in Ganache, displaying recently mined blocks on a local Ethereum network. Each block includes details like timestamp, gas used, and transaction count, confirming successful transaction mining and block creation.
Figure 8: Transactions tab in Ganache
Figure 8 displays a screenshot of the Transactions tab in Ganache, showing a list of blockchain transactions, including contract calls and contract creations. Each entry includes transaction hashes, gas used, sender and receiver addresses, and the type of interaction with the smart contract.
B. Sentiment Analysis on Reviews Dataset
Figure 9 BERT Confusion Matrix
Figure 9 denotes the BERT Confusion Matrix. The breakdown of the confusion matrix is given below:
- True Negative (top-left, 135): BERT correctly predicted "negative" 135 times when the real label was "negative."
- False Positive (top-right, 37): BERT wrongly predicted "positive" 37 times when the real label was "negative."
- False Negative (bottom-left, 14): BERT wrongly predicted "negative" 14 times when the actual label was "positive."
- True Positive (bottom-right, 814): BERT correctly predicted "positive" 814 times when the actual label was "positive."
Figure 10 BERT Loss Curves
Figure 10 exhibits the BERT Loss Curves. In Epoch 1, Train Loss is obtained as 0.2026 and Validation Loss is obtained as 0.1076. In Epoch 2, Train Loss is obtained as 0.0882 and Validation Loss is obtained as 0.1434. In Epoch 3, Train Loss is obtained as 0.0463 and Validation Loss is obtained as 0.1495.
Figure 11 RoBERTa Confusion Matrix
Figure 11 indicates the RoBERTa Confusion Matrix. The breakdown of confusion matrix is given below:
-
True Negative (top-left, 145): RoBERTa correctly predicted "negative" 145 times when the real label was "negative."
-
False Positive (top-right, 27): RoBERTa wrongly predicted "positive" 27 times when the real label was "negative."
-
False Negative (bottom-left, 13): RoBERTa wrongly predicted "negative" 13 times when the actual label was "positive."
-
True Positive (bottom-right, 815): RoBERTa correctly predicted "positive" 815 times when the actual label was "positive.
Figure 12 RoBERTa Loss Curves
Figure 12 gives the RoBERTa Loss Curves. In Epoch 1, Train Loss is obtained as 0.2392 and Validation Loss is obtained as 0.1356. In Epoch 2, Train Loss is obtained as 0.1033 and Validation Loss is obtained as 0.0985. In Epoch 3, Train Loss is obtained as 0.0692 and Validation Loss is obtained as 0.1301.
Figure 13 GPT-CNN Confusion Matrix
Figure 13 provides the GPT-CNN Confusion Matrix. The breakdown of confusion matrix is given below:
-
True Negative (top-left, 146): GPT-CNN correctly predicted "negative" 146 times when the actual label was "negative."
False Positive (top-right, 26): GPT-CNN wrongly predicted "positive" 26 times when the actual label was "negative."
-
-
False Negative (bottom-left, 29): GPT-CNN wrongly predicted "negative" 29 times when the actual label was "positive."
-
True Positive (bottom-right, 799): GPT-CNN correctly predicted "positive" 799 times when the actual label was "positive."
-
Figure 14 GPT-CNN Loss Curves
Figure 14 offers the GPT-CNN Loss Curves. In Epoch 1, Train Loss is obtained as 0.3793 and Validation Loss is obtained as 0.1820. In Epoch 2, Train Loss is obtained as 0.1549 and Validation Loss is obtained as 0.1571. In Epoch 3, Train Loss is obtained as 0.0971 and Validation Loss is obtained as 0.1412.
Figure 15 Sentiment Distribution Graph
Figure 15 represents the sentiment distribution graph. Both positive and negative sentiments are plotted in the graph.
Comparative Study of Models
Table 2 BERT, RoBERTa and Hybrid GPT-CNN Metric Comparison

Metric / Epoch          | BERT   | RoBERTa | GPT-CNN (Hybrid)
True Negatives (TN)     | 135    | 145     | 146
False Positives (FP)    | 37     | 27      | 26
False Negatives (FN)    | 14     | 13      | 29
True Positives (TP)     | 814    | 815     | 799
Epoch 1 Train Loss      | 0.2026 | 0.2392  | 0.3793
Epoch 1 Validation Loss | 0.1076 | 0.1356  | 0.1820
Epoch 2 Train Loss      | 0.0882 | 0.1033  | 0.1549
Epoch 2 Validation Loss | 0.1434 | 0.0985  | 0.1571
Epoch 3 Train Loss      | 0.0463 | 0.0692  | 0.0971
Epoch 3 Validation Loss | 0.1495 | 0.1301  | 0.1412
Accuracy                | 0.95   | 0.96    | 0.94
Negative Precision      | 0.91   | 0.92    | 0.83
Negative Recall         | 0.78   | 0.84    | 0.85
Negative F1-Score       | 0.84   | 0.88    | 0.84
Positive Precision      | 0.96   | 0.97    | 0.97
Positive Recall         | 0.98   | 0.98    | 0.96
Positive F1-Score       | 0.97   | 0.98    | 0.97
Macro Avg Precision     | 0.93   | 0.94    | 0.90
Macro Avg Recall        | 0.88   | 0.91    | 0.91
Macro Avg F1-Score      | 0.91   | 0.93    | 0.90
Weighted Avg Precision  | 0.95   | 0.96    | 0.95
Weighted Avg Recall     | 0.95   | 0.96    | 0.94
Weighted Avg F1-Score   | 0.95   | 0.96    | 0.95
BERT, RoBERTa and Hybrid GPT-CNN Metric Comparison is given in the above Table 2. Metric values like True Positive, True Negative, False Negative and False Positive, Train Loss and Validation Loss are provided for BERT, RoBERTa and Hybrid GPT-CNN.
Now, we break down Table 2 below.
- Training and Validation Loss
- BERT has the lowest training loss, suggesting it fits the training data well.
- RoBERTa has the best validation loss, indicating better generalization.
- GPT+CNN performs reasonably but not as well as the transformers.
- Overall Accuracy
RoBERTa slightly outperforms the others in overall accuracy, showing its robustness on both training and unseen data.
- Class-wise Performance
For the negative class (172 samples), the models' performance is summarized as follows:
- RoBERTa achieves the best balance between precision and recall.
- GPT+CNN trades off precision for recall: it is good at catching negative cases but produces more false positives.
- BERT leans more toward precision than recall.
For the positive class (828 samples), the models' performance is summarized as follows:
- All models perform very well, especially on the dominant positive class.
- RoBERTa slightly edges out the others with the highest F1.
- Macro and Weighted Averages
- Macro averages weight both classes equally; RoBERTa shows the best balance.
- Weighted averages favor performance on the positive class (due to class imbalance). All models are strong here, but RoBERTa wins narrowly.
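The distinction between the two averages can be checked numerically from RoBERTa's confusion matrix in Figure 11 (TN=145, FP=27, FN=13, TP=815), reproducing the precision averages reported in Table 2:

```python
# RoBERTa confusion matrix from Figure 11
tn, fp, fn, tp = 145, 27, 13, 815
p_neg = tn / (tn + fn)             # negative-class precision: 145/158
p_pos = tp / (tp + fp)             # positive-class precision: 815/842
n_neg, n_pos = tn + fp, fn + tp    # class supports: 172 and 828

macro = (p_neg + p_pos) / 2                                    # both classes equal weight
weighted = (n_neg * p_neg + n_pos * p_pos) / (n_neg + n_pos)   # weighted by support
print(f"macro={macro:.2f}, weighted={weighted:.2f}")  # macro=0.94, weighted=0.96
```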
- Summary of Results
Here, we present the strengths and weaknesses of BERT, RoBERTa, and the hybrid GPT-CNN in Table 3 below, along with a summary.
Table 3: Illustration of strengths and weaknesses of BERT, RoBERTa and Hybrid GPT-CNN

Model          | Strengths                                                     | Weaknesses
BERT           | Strong positive class performance, low train loss             | Lower recall on negative class
RoBERTa        | Best overall accuracy and balance across classes              | Slightly higher training loss
Hybrid GPT+CNN | Good general performance, especially recall on minority class | Lower precision on negative class, higher train loss
- The proposed GPT-2+CNN model performs competitively in macro and weighted F1 scores, especially excelling in efficiency.
- Although RoBERTa yields slightly better results, it requires significantly more training time and resources.
- For real-time or resource-constrained deployments (like educational labs), the proposed model offers an ideal balance between accuracy and computational cost.
- RoBERTa is the best performer overall, showing consistent results across all metrics with excellent generalization.
- BERT is solid but shows signs of slight overfitting (very low train loss, higher validation loss).
- GPT+CNN is effective and lightweight but slightly weaker than the transformer-based models in handling imbalanced class distributions.
CONCLUSION
We employed cutting-edge transformer-based models (BERT, RoBERTa, and a hybrid GPT-2+CNN architecture) to classify feedback as positive or negative using a publicly available dataset from GitHub. Additionally, all laboratory item data was securely stored on a private Ethereum blockchain using Ganache and Solidity smart contracts. Furthermore, we successfully integrated the Gemini API to allow users to generate queries via natural language commands, making the system highly interactive and user-friendly. The final outcome was a secure, smart laboratory management system capable of real-time sentiment analysis and tamper-proof data handling.
REFERENCES
[1] A. Rizzardi, S. Sicari, and A. Coen-Porisini, "IoT-driven blockchain to manage the healthcare supply chain and protect medical records," Future Generation Computer Systems, vol. 161, pp. 415-431, 2024.
[2] IBM, https://www.ibm.com/think/topics/blockchain, 2025 (accessed 25 April 2025).
[3] M. Pournader, Y. Shi, S. Seuring, and S. L. Koh, "Blockchain applications in supply chains, transport and logistics: a systematic review of the literature," International Journal of Production Research, vol. 58, no. 7, pp. 2063-2081, 2020.
[4] A. Hasselgren, K. Kralevska, D. Gligoroski, S. A. Pedersen, and A. Faxvaag, "Blockchain in healthcare and health sciences - A scoping review," International Journal of Medical Informatics, vol. 134, p. 104040, 2020.
[5] K. Demestichas, N. Peppes, T. Alexakis, and E. Adamopoulou, "Blockchain in agriculture traceability systems: A review," Applied Sciences, vol. 10, no. 12, p. 4113, 2020.
[6] R. Patel, M. Migliavacca, and M. E. Oriani, "Blockchain in banking and finance: A bibliometric review," Research in International Business and Finance, vol. 62, p. 101718, 2022.
[7] M. Reda, D. B. Kanga, T. Fatima, and M. Azouazi, "Blockchain in health supply chain management: State of art challenges and opportunities," Procedia Computer Science, vol. 175, pp. 706-709, 2020.
[8] A. Vaswani et al., "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[9] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," 2018.
[10] Y. Liu et al., "RoBERTa: A robustly optimized BERT pretraining approach," 2019.
[11] K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, "ELECTRA: Pre-training text encoders as discriminators rather than generators," arXiv preprint, 2020.
[12] F. Sufi, "Advanced computational methods for news classification: A study in neural networks and CNN integrated with GPT," Journal of Economy and Technology, vol. 3, pp. 264-281, 2025.
[13] T. Ehrhart, R. Troncy, D. Shapira, and B. Limoges, "Predicting Business Events from News Articles," in ISWC (Posters/Demos/Industry), 2023.
[14] F. Sufi, "Addressing Data Scarcity in the Medical Domain: A GPT-Based Approach for Synthetic Data Generation and Feature Extraction," Information, vol. 15, no. 5, p. 264, 2024.
[15] F. Sufi, "Generative pre-trained transformer (GPT) in research: A systematic review on data augmentation," Information, vol. 15, no. 2, p. 99, 2024.
[16] L. Alhammad, "The impact of laboratory automation on efficiency and accuracy in healthcare settings," International Journal of Community Medicine and Public Health, vol. 11, no. 1, pp. 459-463, 2023.
[17] C. Archetti, A. Montanelli, D. Finazzi, L. Caimi, and E. Garrafa, "Clinical laboratory automation: a case study," Journal of Public Health Research, vol. 6, no. 1, p. jphr.2017.881, 2017.
[18] D. J. K. Gill, "NLP for Sentiment Analysis in Customer Feedback," 2024. Available: https://www.xenonstack.com/blog/nlp-for-sentiment-analysis (accessed 26 April 2025).
[19] I. A. Omar, R. Jayaraman, M. S. Debe, K. Salah, I. Yaqoob, and M. Omar, "Automating procurement contracts in the healthcare supply chain using blockchain smart contracts," IEEE Access, vol. 9, pp. 37397-37409, 2021.
[20] Havrylenko, "Developing a System for Automated Monitoring of the Procurement Process Using Digital Technologies and Analyzing the Results of Previous Procurements," 2023.
[21] O. Denysov, N. Litvin, A. Lotariev, and V. Oliinyk, "Digitization of the Production Process: An Example of The Use of RFID Technologies For Modern Enterprises," vol. 5, no. 11, pp. 626-635, 2024.
[22] S. Raghul, G. Jeyakumar, S. Anbuudayasankar, and T.-R. Lee, "E-procurement optimization in supply chain: A dynamic approach using evolutionary algorithms," Expert Systems with Applications, vol. 255, p. 124823, 2024.
[23] D. Sihombing, "Enhancing Neurology Clinic Efficiency through Agile-Based Inventory Management System for Medical Supplies," vol. 13, no. 02, pp. 268-277, 2024.
[24] E. Munari et al., "Cutting-edge technology and automation in the pathology laboratory," vol. 484, no. 4, pp. 555-566, 2024.
[25] V. Tamraparani, "Applying Robotic Process Automation & AI techniques to reduce time to market for medical devices compliance & provisioning," vol. 15, no. 1, 2024.
[26] E. Pelipenko, D. Ivanov, A. Dubgorn, and A. Levina, "Data-Driven Management of Medicine Provision in a Health Care Facility," in Innovations for Healthcare and Wellbeing: Digital Technologies, Ecosystems and Entrepreneurship. Springer, 2024, pp. 285-308.
[27] K. Prabhod, "The Role of Artificial Intelligence in Reducing Healthcare Costs and Improving Operational Efficiency," Quarterly Journal of Emerging Technologies and Innovations, vol. 9, no. 2, pp. 47-59, 2024.
[28] S. Li, Z. Guo, and X. Zang, "Advancing the production of clinical medical devices through ChatGPT," Annals of Biomedical Engineering, vol. 52, no. 3, pp. 441-445, 2024.
[29] GitHub, https://amazon-reviews-2023.github.io/, 2025 (accessed 26 April 2025).
