

- Open Access
- Authors : Hritambhar Ray, Pradipto Saha, Ms. Bijoyini Baghchi
- Paper ID : IJERTV14IS050033
- Volume & Issue : Volume 14, Issue 05 (May 2025)
- Published (First Online): 15-05-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Design and Development of a Broiler Mortality Removal Robot
Highlights
Hritambhar Ray, Pradipto Saha, Ms. Bijoyini Baghchi
- A broiler mortality removal robot was successfully developed.
- The broiler shank was the target anatomical part for detection and mortality pickup.
- Higher light intensities improved detection and pickup performance.
- The final success rate for picking up dead birds was 90.0% at the 1000-lux light intensity.
ABSTRACT. Broiler mortality collection by hand is tedious, time-consuming, and unpleasant. The goals of this study were to: (1) create a broiler mortality removal robot from commercially available parts that automatically collects dead birds; (2) assess deep learning models and image processing techniques for detecting and locating dead birds; and (3) assess the robot's detection and mortality pickup abilities under various light intensities. The robot consisted of a robot arm, a two-finger gripper, a camera mounted on the arm, and a computer controller. The robot arm was mounted on a table, and 64 Ross 708 broilers between 7 and 14 days of age were utilised for the robot's development and evaluation. The broiler shank was the target anatomical part for detection and mortality pickup. Deep learning models and image processing techniques were integrated into the vision system to provide the location and orientation of the shank of interest so that the gripper could approach it and position itself for precise pickup. Light intensities of 10, 20, 30, 40, 50, 60, 70, and 1000 lux were investigated. The findings showed that the deep learning model You Only Look Once (YOLO) V4 was better than YOLO V3 at detecting and locating shanks. Higher light intensities enhanced deep learning model detection, orientation identification in image processing, and final pickup performance. At the 1000-lux light intensity, the final success rate for picking up dead birds was 90.0%. The developed system is useful for automating the removal of broiler mortality from commercial housing, and it contributes to the further development of an integrated autonomous set of solutions to improve production and resource use efficiency in commercial broiler production, as well as to improve worker well-being.
Keywords. Automation, Broiler, Deep learning, Image processing, Mortality, Robot arm.
Broiler mortality in production systems can result from severe disease, metabolic issues, unfavourable environmental circumstances, and poor management practises (Tottori et al., 1997; Schwean-Lardner et al., 2013), with a typical 7-week production cycle having a mortality rate of roughly 5% (National Chicken Council, 2020). Farm workers spend a significant amount of time each day locating, collecting, and removing dead birds as routine jobs. Bird body weight rapidly increases from less than 500 g within the first two weeks to above 3000 g after week 7, thanks to continual selection of optimal genetic strains for efficient and quick growth (Aviagen, 2019). Due to the light weights and small sizes of the dead birds, manual mortality collection for young birds can frequently be finished during one house walk-through. However, modern broiler production systems are still being scaled up and intensified, with roughly 27,000 to 29,000 birds in a typical house (Gates et al., 2008). In these commercial systems, workers may strategically arrange dead birds during inspection and remove the carcasses at the end of inspection as bird sizes increase during a production cycle (Holland, 2005), frequently needing numerous laborious and exhausting passes through each broiler house. Farm workers may experience health risks from prolonged exposure to unfavourable working conditions with concentrated ammonia, dust, and odours (Carey et al., 2004), and postponing mortality removal may increase in-house population pressure and the risk of disease spreading through direct contact or vector transmission (Dent et al., 2008). Therefore, efficient and self-sufficient solutions for broiler mortality removal are required to decrease labour requirements, boost farm biosecurity, and promote worker wellbeing.
Robotics have recently been used to facilitate poultry production and have offered several options for eliminating the demand for manual labour, as in other industries (Astill et al., 2020; Ren et al., 2020). The integrated autonomous systems used in modern robotics include components for perception, learning and reasoning, communication, and task planning and execution (Ren et al., 2020). A few forward-thinking robotics businesses have attempted to automate time-consuming and laborious operations in chicken production. The Octopus Poultry Safe robots were created by the French robotics company Octopus Biosafety (2022) for the broiler sector; the robots are outfitted with laser pointers to stimulate bird movement, scarifiers to scarify the litter, and environmental sensors to measure temperature, relative humidity, and ammonia. Smaller broiler robots with comparable features were created by Tibot (2022), another French robotics company; these robots move along the litter floor of a broiler house as prongs pierce the litter to keep it aerated. The ChickenBoy robot, marketed by the poultry equipment company Big Dutchman (2022), is mounted on rails above a flock and slowly moves around a house, gathering data on the ambient environment (such as humidity, temperature, airspeed, and carbon dioxide) and thermographic imaging to evaluate the welfare and health of the birds. However, due to a relatively large initial investment, robots have not been widely deployed in the American broiler sector, while more European robotics companies are putting their efforts into developing robotic systems to help with precision management of broilers.
Robotic approaches to automating hen production have also been investigated. Li (2016) designed and constructed a robot platform to search for dead layers in a stair-step cage system; various stimuli, such as sound, light, and vibration, were applied in front of the cages to induce live birds to move, so that the motionless birds could be marked as mortalities. Vroegindeweij et al. (2018) developed PoultryBot, which can drive independently for more than 3000 m in cage-free hen housing systems while dodging obstacles and coping with the presence of hens; a bent helical spring installed in front allows it to locate and collect floor eggs. Chang et al. (2020) built an egg-collecting robot with a chassis frame, an egg-picking mechanism, an egg-sorting device, an egg storage tank, an egg collection channel, and a control box; the robot successfully collected floor eggs from free-range laying hens. Li et al. (2021a) paired a deep-learning-based computer vision system with a robot arm, a soft end effector, and a camera, also to gather floor eggs. In summary, most recent robotics research in the poultry industry has focused on floor eggs in egg production, and only one robotic study addressed broiler mortality removal. Liu et al. (2021) built an integrated robot system that can identify dead chickens, sweep them onto a conveyor belt, and store them, using predetermined straight-line pathways in a commercial broiler house. Despite the robot's accuracy of 95% in
gathering dead broilers, the embedded vision system's resilience and generalizability were unclear because it was developed using only 110 images, and images of cocks and laying hens, with housing and environmental conditions rather different from those of the American broiler industry, were included in the development dataset. Due to its limited mobility (one degree of freedom), the conveyor system may not reach broilers in secluded regions near corners or under feeding or drinking lines, even though it may perform well gathering dead chickens in open areas. As a result, creating a reliable broiler mortality removal robot is still necessary. Our team intends to divide the robot development into robot path planning, the robot arm, and a ground robotic vehicle with accurate indoor navigation, and to integrate these parts once all are developed. This strategy is inspired by the poultry robotics team in the Netherlands (Vroegindeweij et al., 2018) and the Georgia Tech Research Institute (Usher et al., 2017). This study focuses on the robot arm and vision system for detecting broiler mortality under simulated broiler production conditions. Practical considerations should guide the design and development of a broiler mortality removal robot. Flexible robot arms may overcome the movement limitations of the bent-helical-spring (Vroegindeweij et al., 2018) and collection-channel (Chang et al., 2020) approaches and reach remote places where dead birds may arise. Additionally, to collect dead birds from the broad areas found in conventional broiler house construction, robot arms can be integrated into movable platforms such as ground vehicles, ceiling-mounted rail systems, and unmanned aerial vehicles. Uncertainty persists about how well robot arms will perform when picking up dead birds. A strong vision system is essential for a robot to detect dead birds. Deep-learning-based computer vision systems, which are used more and more in broiler production (Li et al., 2021c), may help with the identification of dead birds. To correctly detect and locate dead birds in images, effective deep learning models must balance processing speed and detection accuracy (Huang et al., 2017). Matching image processing algorithms must also be developed to offer exact orientation information so that robot arms can grasp critical parts. Because of bird needs, light leakage, and coverage by infrastructure (e.g., feeders, drinkers), light intensity varies throughout a broiler house (Olanrewaju et al., 2006). A broiler mortality removal robot therefore requires a reliable method of detecting and picking up dead birds across a range of light intensities.
The objectives of this study were: (1) to create and build a robot from commercially available parts that automatically collects dead birds; (2) to assess deep learning models and image processing techniques for detecting and locating dead birds; and (3) to examine the performance of the robot for detecting and picking up dead birds under various controlled light intensities.
MATERIALS AND METHODS
Birds and System
The robot system was developed at the USDA-ARS Poultry Research Unit at Mississippi State (fig. 1). To simulate the conditions of a broiler house floor, pine shavings were purchased from a nearby store and laid out on a table. The robot arm (Gen 3, Kinova Inc., Boisbriand, QC, Canada) was mounted on the table and connected to a laptop for control. The laptop had 32 GB of RAM, a 9th Gen Intel Core i9-9900K processor, and an 8 GB NVIDIA GeForce RTX 2080 GPU. The robot arm has a maximum payload of 2000 g and seven degrees of freedom (DoF) of motion. A camera (Intel® RealSense™ D435, Intel Corporation, Santa Clara, CA; table 1) mounted on the arm before the gripper was used to take top-view pictures of dead birds. An adaptive two-finger gripper (Robotiq 2F-85, Kinova Inc., Boisbriand, QC, Canada) was installed at the end of the arm to grasp dead birds. In total, 64 Ross 708 broilers were used for the robot's development. Their body weights, which varied from 58.0 to 587.4 g (table 2), fell within the robot's maximum payload. The system was developed, following the USDA-ARS Animal Care and Use Committee's requirements, with mortalities from an ongoing research experiment at a Mississippi State University location (protocol 21-3, approved 10 February 2021), which led to an unequal distribution of birds across ages.
Table 2. Information about broilers used for robot development.[a]

| Days of Bird Age | Number of Birds Used | Average BW (mean±SD, g) | Maximum BW (g) | Minimum BW (g) |
| --- | --- | --- | --- | --- |
| 7 | 1 | 58.0 | - | - |
| 9 | 13 | 155.4±29.7 | 197.6 | 100.0 |
| 14 | 50 | 462.5±52.1 | 587.4 | 352.4 |

[a] BW is body weight and SD is standard deviation.

These birds were rigid from rigor mortis and had been dead for 24 to 96 hours. Following discussions with broiler farm managers, the lying positions of the dead birds were chosen to reflect commercial settings.
OVERALL DESIGN OF THE ROBOT
The robot operation's overall workflow comprised initialization procedures, shank detection and localization, identification of the shank's orientation, and mortality collection (fig. 2). The shank is the unfeathered part of the leg between the tibiotarsal joint (sometimes known as the "hock") and the metatarsophalangeal joint. Sections 2.4 and 3.1 detail the shank determination process. The initialization procedures involved starting the robot, turning on the camera, and obtaining top-view photos for further analysis.
A deep learning model was used to perform the detection and localization of shanks. The best model was chosen after a comparative evaluation of candidate models (Section 2.5) and was then utilised to extract shank data such as shank indices, coordinates (x, y), and dimensions (width and height). The detected shank with the minimum index was cropped and analyzed via image processing algorithms to extract the shank orientation relative to the robot's position. The image processing incorporated saturation channel extraction, cropping, edge detection, and line detection. Section 2.6 presents specifics of these processing algorithms.
Figure 1. Illustration of the system setup for the design and development of the robot.
Table 1. Information about the hand-mounted camera.

| Parameter | Values |
| --- | --- |
| Model | Intel RealSense D435 |
| Ideal range (m) | 0.3-3 |
| Field of view (°) | Depth: 87×58; RGB: 69×42 |
| Maximum output resolution (pixels) | Depth: 1280×720; RGB: 1920×1080 |
| Frame rate (fps) | Depth: 90; RGB: 30 |
| Depth accuracy (%) | <2 at 2 m |
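For context, the RGB stream described above can be captured with Intel's pyrealsense2 Python bindings. The snippet below is a minimal sketch assuming that SDK (the paper does not state which software interface was used), configured for the 1280×720 RGB resolution cited later in the text.

```python
import numpy as np
import pyrealsense2 as rs

# Configure the D435 to stream RGB only; the robot controller in this
# study consumed RGB frames (1280x720) for detection and control.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    color = frames.get_color_frame()
    image = np.asanyarray(color.get_data())  # HxWx3 BGR array for the detector
finally:
    pipeline.stop()
```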
Figure 2. Overall workflow for the broiler mortality removal robot.
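As a bridge between the workflow of figure 2 and the control details below, one pickup round might be orchestrated as in the following sketch. All objects and method names here are hypothetical wrappers around the Kinova arm API, the RealSense camera, the YOLO model (Section 2.5), and the image processing chain (Section 2.6), not the paper's actual software interfaces.

```python
def remove_mortality_round(robot, camera, detector, orient_fn):
    """One pickup round following figure 2: detect, orient, grasp, store."""
    image = camera.capture_top_view()
    shanks = detector.detect(image)           # [(index, x, y, w, h), ...]
    if not shanks:
        return False                          # no shank detected this round
    idx, x, y, w, h = min(shanks)             # shank with minimum index
    crop = image[int(y - h / 2):int(y + h / 2),
                 int(x - w / 2):int(x + w / 2)]
    theta = orient_fn(crop)                   # shank orientation (degrees)
    robot.move_gripper_above(x, y)            # linear speeds, eqs. 1-3 below
    robot.rotate_gripper_to(theta)            # angular speed, eqs. 6-7 below
    robot.grasp()
    robot.lift_and_drop_at_storage()
    return True
```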
To place the gripper above the shank of interest, the extracted shank coordinates were provided to the robot and transformed from the image coordinate system to the robot arm coordinate system. Based on the detected angle between the camera or gripper orientation and the shank in a horizontal plane, the gripper then rotated to match the shank orientation. The robot then grasped the dead bird's shank, lifted it, relocated it, and dropped it at the storage location.

Coordinate Transformation Between the End Effector and Vision System

Despite the robot having seven degrees of freedom (DoF), which correspond to seven Cartesian coordinate systems, only the coordinate system at the end of the arm was adopted to create the connection between the end effector and the hand-mounted camera for broiler mortality pickup. Because they worked well with the majority of deep learning object detection algorithms, only the RGB images from the Intel RealSense camera were used for robot control. The gripper and camera were held in the ideal position, perpendicular to the litter floor, allowing the camera to take crisp top-view pictures of dead broilers. The imaging distance between the camera and the litter floor was approximately 20 cm, and the pixel-to-distance conversion factor for the coordinate transformation was 38.4 pixels/cm based on manual measurement.

Only the linear and angular speeds of the robot arm's movement from the gripper position to the desired position of a body part (such as the shank) were used for robot control; these were calibrated using equations 1-7. Absolute values of coordinates in the various coordinate systems were of no interest.

$LS_x = \dfrac{W_I/2 - X_e}{CNV_{p \to T} \cdot T}$  (1)

$LS_y = \dfrac{H_I/2 - Y_e}{CNV_{p \to T} \cdot T}$  (2)

$LS_z = \dfrac{H_G}{T}$  (3)

$AS_x = 0$  (4)

$AS_y = 0$  (5)

For long-axis-based rotation,

$AS_z = \begin{cases} \dfrac{90 - \theta}{T}, & \text{if } \theta \le 90 \\ \dfrac{90 - \theta + 180}{T}, & \text{if } \theta > 90 \end{cases}$  (6)

For short-axis-based rotation,

$AS_z = \begin{cases} \dfrac{-\theta}{T}, & \text{if } \theta \le 90 \\ \dfrac{180 - \theta}{T}, & \text{if } \theta > 90 \end{cases}$  (7)

where

LS_x, LS_y, and LS_z = linear speeds of the robot arm in the x, y, and z axes of the robot Cartesian coordinate system, respectively;
AS_x, AS_y, and AS_z = angular speeds of the robot arm in the x, y, and z axes of the robot Cartesian coordinate system, respectively;
(X_e, Y_e) = centroid coordinates in the imagery coordinate system;
W_I, H_I = width and height of an image, 1,280 and 720 pixels in this case, respectively;
CNV_{p→T} = pixel-to-distance conversion factor, 33.8 pixels/cm in this case;
H_G = gripper height, 16.3 cm in this case;
T = robot manipulation period, 2 s in this case;
θ = orientation of a desired body part in the imagery coordinate system; and
long- or short-axis-based rotation = the gripper rotates to fit the long or short axis of a targeted body part.
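To make equations 1-7 concrete, a minimal Python sketch of the speed calibration is given below. The function and variable names are ours, the constants follow the definition list above, and the piecewise branches mirror the reconstruction of equations 6-7, so treat the sign conventions as indicative rather than definitive.

```python
# Constants from the definitions above.
W_I, H_I = 1280, 720   # image width and height (pixels)
CNV = 33.8             # pixel-to-distance conversion factor (pixels/cm)
H_G = 16.3             # gripper height (cm)
T = 2.0                # robot manipulation period (s)

def linear_speeds(x_e, y_e):
    """Eqs. 1-3: linear speeds (cm/s) that centre the gripper over the
    detected shank centroid (x_e, y_e) and lower it within one period T."""
    ls_x = (W_I / 2 - x_e) / (CNV * T)
    ls_y = (H_I / 2 - y_e) / (CNV * T)
    ls_z = H_G / T
    return ls_x, ls_y, ls_z

def angular_speed_z(theta, long_axis=True):
    """Eqs. 4-7: AS_x = AS_y = 0; AS_z (deg/s) rotates the gripper to fit
    the long or short axis of a shank with image orientation theta (deg)."""
    if long_axis:                                                  # eq. 6
        return (90 - theta) / T if theta <= 90 else (90 - theta + 180) / T
    return -theta / T if theta <= 90 else (180 - theta) / T       # eq. 7
```

For example, a shank detected at the image centre with θ = 90° yields zero linear and angular speeds under long-axis-based rotation, i.e., the gripper is already positioned and aligned.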
Lighting Environments and Image Acquisition for Robot Development

To simulate the lighting variations caused by bird age, obstruction by infrastructure, and light leakage, the light intensities were programmed to range from 10 to 70 lux at 10-lux intervals. The light intensities of interest were set using a voltage transformer adjustment and a light metre (LT300, Extech Instruments, Waltham, MA) positioned at bird level directly underneath the camera. To see whether additional lighting enhanced the performance of shank recognition, a supplemental lamp was also set to a 1000-lux intensity (fig. 1). For each intensity, images were taken by the camera at 5-second intervals while it revolved with the gripper. This operation produced varied conditions in the dataset: one to four (1-4) birds in various postures were partially or entirely captured, and similar or identical images were eliminated. The number of images was 1653 for 10 lux, 1825 for 20 lux, 1732 for 30 lux, 1707 for 40 lux, 1826 for 50 lux, 1852 for 60 lux, 1725 for 70 lux, and 1941 for 1000 lux. The roughly even distribution of image counts across intensities may help the models infer shanks without bias due to light intensity.

Testing of the Dead Bird Pickup Performance via Manual Robot Control

It was anticipated that the shank would be the best part to pick up, but this needed to be confirmed. The robot was manually commanded to pick up seven different parts: the head, neck, wing, hock, shank, toe, and entire body (fig. 3a). The benefits and drawbacks of picking up the various parts were contrasted. To better understand robot control, the geometric relationships (perpendicular vs. non-perpendicular) between the best-chosen part for pickup and the gripper long axis were also examined (fig. 3b). Thirty (30) measurements of each relationship were made.

Comparative Evaluation of Deep Learning Models for Shank Recognition and Localization

You Only Look Once (YOLO) is a group of well-known deep learning models used for real-time processing. YOLO V3 and V4 are two recently launched variants that enhance the original model's processing speed and detection accuracy (Redmon and Farhadi, 2018; Bochkovskiy et al., 2020). However, the best model for detecting and locating shanks is still being researched. Although more contemporary YOLO models have been published, they were not considered here because of their incompatibility with the present robot working conditions. Future iterations of the robot arm might support additional deep learning object detection models and help resolve the incompatibility with computing environments.

The YOLO V3 and V4 were implemented in the Darknet environment. Model training took place on a free cloud server (Google Colab) equipped with a Tesla P100-PCIE-16GB GPU. The two models shared most training settings, such as a batch size of 64, scaled images of 416×416 pixels, subdivisions of 16, momentum of 0.95, decay of 0.0005, a learning rate of 0.001, and maximum batches of 48,000. Skilled technicians labelled the images from Section 2.3 for model training and evaluation, with 80% used for training and 20% for testing. For model development, bounding boxes were drawn around the relevant areas and exported in YOLO format. A chart of training loss across epochs was used to show the model's performance in real time. The training procedure was ended, and the corresponding model weights were retained for later analysis and application, once the loss reached a plateau without significant fluctuations.
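As an illustration of the data preparation described above, the sketch below performs an 80/20 split and writes one YOLO-format label line per bounding box. The directory layout, file names, and single class id (0 for shank) are assumptions for illustration, not the paper's exact setup.

```python
import random
from pathlib import Path

# 80/20 train/test split over the labelled images (paths are assumed).
images = sorted(Path("dataset/images").glob("*.jpg"))
random.seed(42)  # reproducible split
random.shuffle(images)
cut = int(0.8 * len(images))
Path("train.txt").write_text("\n".join(map(str, images[:cut])))
Path("test.txt").write_text("\n".join(map(str, images[cut:])))

def yolo_label(x, y, w, h, img_w=1280, img_h=720, cls=0):
    """One YOLO-format label line: class id, then the box centre and
    size, each normalized to [0, 1] by the image dimensions."""
    return f"{cls} {x/img_w:.6f} {y/img_h:.6f} {w/img_w:.6f} {h/img_h:.6f}"
```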
Figure 3. Illustration of mortality pickup performance testing via manual robot control: (a) body parts and (b) gripper-shank relationship. θ is the angle between the shank and gripper in a horizontal plane. The camera lens is parallel with the long axis of the gripper.
Development and Optimization of Image Processing Algorithms

The effectiveness of image processing algorithms depends on appropriate design. Both RGB (red, green, and blue) and HSV (hue, saturation, and value) colour channels define the colours of images, but HSV is designed to match how people perceive light (Gonzalez and Woods, 2002). The colour of chicken shank skin was darker than that of litter (dark yellow vs. light yellow), making it more visually saturated than litter. The saturation channel (fig. 2) was therefore chosen to create grayscale images, because it can retain the important features of concern (such as a chicken shank) while omitting certain unneeded features (such as litter). The Canny algorithm, a popular edge detection technique, was used to produce binary images from the saturation-channel images (Rong et al., 2014). It computed the intensity gradients of the images (such as the outer borders of shanks), deleted unneeded pixels (such as litter and skin pixels) away from the gradient directions, and suppressed noise (tiny particles outside the areas of interest). Due to its resistance to noise and missing data, the Hough transform was chosen to determine the orientation of the shank edge (Fernandes and Oliveira, 2008). The shank edge orientation represented the shank orientation with respect to the gripper or camera position.

To achieve the best performance, the algorithms' parameters (such as kernel sizes) were fine-tuned. Using the created algorithms, sixty (60) photos with two to six shanks per image were processed for each light intensity, and the orientations of shank edges in these images were measured manually for algorithm evaluation. A technician took the manual measurements while seated in front of a monitor showing shank images, using a digital protractor.
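A minimal OpenCV sketch of this orientation-extraction chain (saturation channel, Canny edges, Hough lines) is shown below. The thresholds and kernel size are illustrative assumptions rather than the paper's tuned parameters.

```python
import cv2
import numpy as np

def shank_orientation(crop_bgr):
    """Estimate shank edge orientation (degrees) from a cropped detection,
    following the chain above: saturation channel -> Canny -> Hough."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]                         # saturation channel
    sat = cv2.GaussianBlur(sat, (5, 5), 0)     # suppress small particles
    edges = cv2.Canny(sat, 50, 150)            # binary edge image
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=40)
    if lines is None:
        return None                            # no shank edge found
    rho, theta = lines[0][0]                   # strongest candidate line
    # Convert the Hough normal angle to the edge direction in [0, 180).
    return (np.degrees(theta) - 90.0) % 180.0
```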
Evaluation Metrics

Three sets of evaluation criteria were created to evaluate the performance of the deep learning models, the success of the image processing algorithms, and the effectiveness of robot pickup. All evaluations were carried out on the local computer, and the metrics were arranged according to light intensity.

The metrics for the deep learning models were precision (eq. 8), recall (eq. 9), F1 score (eq. 10), root mean square error (RMSE, eq. 11), and processing speed. Larger values for the first three suggest better model performance. The processing speed (frames per second, fps) was calculated by dividing the number of images processed by the total processing time, and larger values suggest faster processing.

$\text{Precision} = \dfrac{\text{True positive}}{\text{True positive} + \text{False positive}}$  (8)

$\text{Recall} = \dfrac{\text{True positive}}{\text{True positive} + \text{False negative}}$  (9)

$F1\ \text{score} = \dfrac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$  (10)

$RMSE = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left[(\hat{x}_i - x_i)^2 + (\hat{y}_i - y_i)^2\right]}$  (11)

where $\hat{x}_i$, $\hat{y}_i$ are the ith predicted coordinates in a horizontal plane, $x_i$, $y_i$ are the ith ground truth coordinates, and N is the total number of measured samples.

The metrics for image processing algorithm evaluation were mainly the errors between predicted orientation (Ô_j) and ground truth orientation (O_j) (eq. 12). Histograms of the errors, ranging from -135° to 135° with 45° intervals, were also organized. Mean absolute error (MAE) was used to average the absolute errors and summarize the overall performance of the algorithms (eq. 13).

$Error_j = \hat{O}_j - O_j$  (12)

$MAE = \dfrac{1}{m}\sum_{j=1}^{m}\left|Error_j\right|$  (13)

where $|\cdot|$ is the absolute operator, $Error_j$ is the error between the jth predicted orientation (Ô_j) and ground truth orientation (O_j), and m is the total number of evaluated errors.

Operation time and success rate were the performance evaluation measures for robot pickup. During each round of operation, Python tracked and reported the operation time. The success rate was calculated by dividing the number of successful cases of mortality pickup by the total number of robot operations.

Testing for Final Performance

The best deep learning model, the optimised image processing algorithms, and the designed robot control algorithms were eventually embedded in the robot to recognise the position and orientation of the ideal anatomical part to be picked up. After various quantities of dead birds were placed on the litter, the robot was automatically operated 60 times under each of the eight light intensities to pick up the dead birds. The robot's final performance in collecting broiler mortality was then assessed.
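The evaluation metrics of equations 8-13 can be expressed compactly; the following sketch mirrors those definitions (function and variable names are ours).

```python
import numpy as np

def precision_recall_f1(tp, fp, fn):
    """Eqs. 8-10 from counts of true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def rmse(pred_xy, true_xy):
    """Eq. 11: localization RMSE over N predicted vs. ground truth centroids."""
    d2 = np.sum((np.asarray(pred_xy) - np.asarray(true_xy)) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

def mae(pred_deg, true_deg):
    """Eqs. 12-13: mean absolute orientation error in degrees."""
    return float(np.abs(np.asarray(pred_deg) - np.asarray(true_deg)).mean())

def success_rate(successes, total_rounds):
    """Successful pickups divided by total robot operations."""
    return successes / total_rounds
```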
RESULTS
Performance of Picking Up Dead Birds Via Manual Control

First, manual control of the robot was used to pick up birds by various body parts (table 3). Despite being simple to grasp or recognise, several anatomical parts (such as the head and the entire body) were physically compressed during grasping and insufficiently strong to avoid tissue injury, which could contaminate the robot and litter. When birds were young, some parts were visually unrecognisable (such as the neck), too soft to pick up (such as the wing), or too small to pick up (such as the hock and toe). The broiler shank was ultimately chosen as the best part to be detected and picked up by the robot, based on considerations of biosecurity and robust pickup and detection performance across bird ages, although its detection performance in a dark environment needs improvement.
Performance of Detecting and Locating Dead Birds Via Deep Learning Models
Table 4 displays the results of the two YOLO models. The precision, recall, and F1 score ranged from 76.1% to 82.4% for YOLO V3 and from 86.3% to 95.1% for YOLO V4. YOLO V4 processed one more image per second than YOLO V3, and the RMSE was comparable between the two models. The YOLO V4 was chosen as the deep learning model for detecting and locating chicken shanks due to its better detection performance and fast processing speed.
Table 5 further details the performance of YOLO V4 over the eight light intensities. When intensities climbed from 10 lux to 30 lux, the precision, recall, and F1 score increased by 17.7% to 23.0%, while they only increased by 1.8% to 4.9% from 30 to 1000 lux. The RMSE ranged from 4.5 to 5.3 mm among the eight light intensities.
Table 3. Major advantages and disadvantages for pickup of different broiler body parts via manual robot control.

| Part Picked Up | Major Advantage | Major Disadvantage |
| --- | --- | --- |
| Head | Visually identifiable | Physically compressed and insufficiently strong for pickup |
| Neck | Readily picked up for old birds | Not visually obvious for young birds |
| Wing | Large for pickup | Feathers being torn off |
| Hock | Firm for pickup | Excessive feather coverage |
| Shank | Firm for pickup and visually identifiable | Color features similar to those of litter in dark environments |
| Toe | Firm for picking | Too small, and color features similar to those of litter in dark environments |
| Whole body | Easy pickup for young birds | Too large for picking up old birds, and the integrity of the body of very young birds being compromised |
Figure 4 shows how well the robot performed when picking up dead birds under the two geometric relationships between the gripper's long axis and the broiler shank. The perpendicular relationship had a success rate of 93.3%, whereas the non-perpendicular relationship had an 86.6% success rate. The former was 6.7% higher than the latter, demonstrating the need for image processing techniques that extract shank orientation so that the gripper can be fitted perpendicularly to the targeted broiler shank.
Figure 4. Pickup performance of the robot under two geometric relationships.
Table 4. Model performance for detecting and locating broiler shank via two deep learning models.[a]

| Model | Precision (%) | Recall (%) | F1 Score (%) | RMSE (mm) | Processing Speed (fps) |
| --- | --- | --- | --- | --- | --- |
| YOLO V3 | 82.4 | 76.1 | 79.1 | 4.5 | 6 |
| YOLO V4 | 95.1 | 86.3 | 90.5 | 4.8 | 7 |

[a] fps is frames per second.
Performance of Detecting Shank Orientation Via Image Processing
Figure 5 shows the discrepancies between the predicted orientations and the ground truth orientations. The percentage of errors within ±45° (showing minimal variance) was 57.2% for 10 lux, 83.1% for 20 lux, 81.7% for 30 lux, 89.9% for 40 lux, 82.7% for 50 lux, 86.6% for 60 lux, 89.2% for 70 lux, and 97.6% for 1000 lux. The MAE was largest (38.7°) at 10 lux and smallest (12.5°) at 1000 lux.
Final Performance of the Broiler Mortality Removal Robot
The final performance of the robot is presented in figure 6. As light intensities rose from 10 to 30 lux, the success rate for automatically locating and removing dead birds increased from 53.3% to 80.0%. Raising the light intensity further from 30 to 60 lux yielded a minor improvement of 6.7%, and success rates stabilised at 86.7% to 90.0% across 60, 70, and 1000 lux. The total operation duration varied between 70.5 and 77.8 s/round.
DISCUSSION
Part Orientation
The broiler mortality removal robot belongs to the category of so-called pick-and-place robots, which require precise location and orientation data for control (Zeng et al., 2018). Precise localization of the chicken shank is challenging even with manual control and the ideal perpendicular shank-gripper orientation, leading to missed alignments, unsuccessful grips, and subpar pickup performance. In contrast to manual control, the robot automatically pinpointed shanks with an RMSE of 4.5 to 5.3 mm (less than 1% deviation within an image) once the shanks were detected by the model, thus improving robot pickup performance.
Table 5. Model performance for detecting and locating broiler shank across eight light intensities.

| Light Intensity (lux) | Precision (%) | Recall (%) | F1 Score (%) | RMSE (mm) |
| --- | --- | --- | --- | --- |
| 10 | 74.5 | 69.4 | 71.8 | 4.5 |
| 20 | 86.1 | 72.2 | 78.6 | 4.8 |
| 30 | 97.5 | 87.1 | 92.0 | 4.9 |
| 40 | 98.8 | 88.5 | 93.4 | 5.0 |
| 50 | 98.9 | 88.6 | 93.5 | 4.9 |
| 60 | 98.9 | 92.5 | 95.6 | 4.1 |
| 70 | 99.0 | 93.3 | 96.0 | 5.1 |
| 1000 | 99.3 | 92.0 | 95.8 | 5.3 |
Interestingly, even when the broiler shanks were not perpendicular to the gripper long axis, the robot nevertheless performed well when picking up dead birds (the success rate was 86.6%). According to our observations, the robot only failed to pick up birds when the gripper was parallel to, or barely intersected (intersecting angle of about 15°) with, the shanks. The robot may thus succeed in picking up dead birds even when the shank orientation is not exactly detected (fig. 5), because pickup performance was not very sensitive to the other angular relationships between the gripper long axis and the broiler shank.
Deep Learning Model
Different deep learning models perform differently in terms of processing speed and detection accuracy (Huang et al., 2017). The YOLO V4 exhibited a processing speed 16.7% faster than the YOLO V3, and its precision, recall, and F1 score were 13.3% to 15.4% higher. These results were in line with those reported by Bochkovskiy et al. (2020).
Figure 5. Histogram of errors between prediction and ground truth and mean absolute errors for different light intensities.
Figure 6. Success rates for automatically picking up dead birds under different light intensities.
The improved performance of YOLO V4 may be attributed to its advanced connection scheme (e.g., weighted residual connections), universal normalization strategy (e.g., cross mini-batch normalization), upgraded training hyperparameters (e.g., self-adversarial training), etc. The YOLO V4 was therefore chosen as the shank detector in this study. However, although well-suited for real-time processing, the YOLO V4 architecture was not robust enough to handle large variations, which led to missed shanks in some extreme settings (such as the 10-lux light intensity) and weakened recall. Future research should focus on more complex models (such as region-based convolutional neural networks; Li et al., 2021b) and the associated robot computing hardware.
Image Processing
A number of factors contributed to the image processing algorithms' acceptable performance (i.e., an MAE of 12.5° at 1000 lux), including accurate deep learning detection results, appropriate algorithm design (e.g., saturation channel extraction, Canny edge detection, and Hough transform), and parameter tuning. Similarly, our earlier article (Li et al., 2021c) showed that bird behaviour could be continuously tracked when image processing was used to thoroughly analyse the bounding-box outputs of deep learning detection. The receptive field of the image processing algorithms was restricted to the detected bounding box, and the detected environments proved straightforward in comparison to the entire image. For the development of image processing algorithms, some error-causing elements cannot be disregarded, such as bird occlusion and overlapping, different lighting conditions, complex backgrounds, unpredictable shadows, etc. (Okinda et al., 2020).
Light Intensity Effect
Lower light intensity resulted in decreased deep learning model recognition, orientation identification in image processing, and, ultimately, pickup performance, because broiler shanks could not be distinguished from other items in dimly lit areas. Previous research indicated that deep learning models were not affected by light intensity when used to detect eggs (Li et al., 2020). However, features (e.g., color, texture, saturation) of broiler shanks differ from those of eggs, and broiler shanks were not visually recognizable under the 10-lux light intensity (fig. 3), thus leading to detection failure. This phenomenon indicates that embedding extra lamps in the robot to illuminate the detection environment may be cost-efficient and helpful for boosting mortality detection and pickup performance in broiler production with dim lights (Olanrewaju et al., 2006). To conserve energy for robot operation, unnecessarily increasing light intensities to improve detection and pickup performance is not advised, because the advantages tend to saturate at intensities over 60 lux. The impact of extremely bright lights on live birds is another concern: in a typical broiler operation, there can be noticeable panic and stress reactions to extremely intense light (Olanrewaju et al., 2006). Another option would be infrared or near-infrared imaging, which is unaffected by light levels and images objects based on their surface temperatures (Yanagi et al., 2002). With the infrared approach, imaging should be finished before the carcasses reach the same temperature as the background. In the meantime, producers' cost effectiveness and economic adaptation should be taken into account.
Bird Size Effect
Due to the uneven distribution of dead birds across bird ages, the effect of bird size on detection and pickup performance was not systematically evaluated; however, it cannot be disregarded in future robot development. Younger birds' shank skin colours were observed to be less saturated (lighter yellow), which may make it harder to tell shanks apart from fresh litter with comparable colour characteristics; additional algorithms are required for detecting the shanks of young birds. For older birds, the robot may not capture full shanks due to the larger shank size and the constrained field of view of the vision system. The camera could be raised to capture more of the scene, and the pixel-to-distance conversion factors should then be adjusted to the camera height. After five weeks of age, a typical bird's body weight exceeds 2000 g, the robot's maximum payload (Aviagen, 2019); hence, further research into the robot's ability to hold birds of various weights, sizes, and ages is necessary.
Other Causes of Mortality Pickup Failure
Other causes of pickup failure, in addition to the ones listed above, should not be disregarded. When it encountered hard obstacles, the robot's gripper deformed adaptively, changing from its regular shape to a twisted shape. Although advantageous for protecting the gripper from damage, the collision-induced twisting rendered the gripper incapable of successfully grasping broiler shanks. Closely curled shanks (left illustration), overlapping birds (centre illustration), and litter covering the shank of interest (right illustration) posed obstacles, as shown in figure 7. It should be noted that these cases were chosen and randomly arranged in a lab setting, after consulting with broiler farm managers, to mimic real-world situations; no particular frequency or proportion of the cases was recorded. A more dependable and robust gripper that can withstand contact forces without deforming is needed.
Occlusion by other shanks Bird overlapping Occlusion by litter
Figure 7. Sample cases of pickup failure. In the right image, the yellow part represents the chicken leg occluded by litter.
FUTURE WORK

Complex processes are required to develop a reliable broiler mortality removal robot that can be autonomously controlled in commercial broiler houses. Through this work, a robot arm for picking up broiler mortality, intended to be mounted on a mobile unmanned ground vehicle, was developed. To find broiler mortalities, the research team has also created navigation algorithms for a ground robot that is free to travel throughout a broiler house. To aid sensing and navigation in a GPS-denied broiler house environment, three-dimensional machine vision sensors (such as LiDAR) are planned to be installed on the ground robot. Research should also be done on properly integrating all necessary components, including detection, navigation, servo, and pickup. The camera and end effector were set at a stable height (around 20 cm). Because the camera was positioned close to the ground and perpendicular to it, the acquired images featured very little distortion that would have affected hand-eye coordination; thus, our straightforward coordinate transformation strategy was successful. However, for applications with significantly distorted views in acquired images, additional calibration techniques are required [such as positioning a checkerboard in front of a robotic vision system and creating homogeneous transformation matrices (Tabb and Ahmad Yousef, 2017)].

Declaration of Competing Interest

The authors affirm that they have no known financial or interpersonal conflicts that could have appeared to influence the research described in this publication.

ACKNOWLEDGMENTS

This study was funded by USDA Agricultural Research Service Cooperative Agreement No. 58-6064-015. The authors much appreciated the experimental assistance and facility upkeep provided by Mississippi State University undergraduate assistants and USDA technicians.
CONCLUSION
A broiler mortality removal robot was developed to automatically collect dead birds. With the broiler shank as the target anatomical part for detection and mortality pickup, a perpendicular orientation between the gripper long axis and the broiler shank led to higher success rates than a non-perpendicular orientation. The YOLO V4 outperformed the YOLO V3 for the detection and localization of broiler shanks in terms of precision, recall, F1 score, and processing speed. The detection performance of the deep learning model, orientation identification during image processing, and final pickup performance were all reduced at lower light intensities. At the 1000-lux light intensity, the final success rate for picking up dead birds was 90.0%.
REFERENCES
Astill, J., Dara, R. A., Fraser, E. D. G., Roberts, B., & Sharif, S. (2020). Smart poultry management: Smart sensors, big data, and the internet of things. Comput. Electron. Agric., 170, 105291. https://doi.org/10.1016/j.compag.2020.105291
Aviagen. (2022). ROSS 708: Performance objectives. Retrieved from https://en.aviagen.com/assets/Tech_Center/Ross_Broiler/RossxRoss708-BroilerPerformanceObjectives2022-EN.pdf
Big Dutchman. (2022). ChickenBoy: Higher productivity and increased welfare with an autonomous robot and artificial intelligence. Retrieved from https://www.bigdutchman.com/en/poultry-growing/products/detail/chickenboy/
Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv:2004.10934, 1-17. https://arxiv.org/abs/2004.10934v1
Carey, J. B., Lacey, R. E., & Mukhtar, S. (2004). A review of literature concerning odors, ammonia, and dust from broiler production facilities: 2. Flock and house management factors. J. Appl. Poult. Res., 13(3), 509-513. https://doi.org/10.1093/japr/13.3.509
Chang, C.-L., Xie, B.-X., & Wang, C.-H. (2020). Visual guidance and egg collection scheme for a smart poultry robot for free-range farms. Sensors, 20(22), 6624. https://doi.org/10.3390/s20226624
Dent, J. E., Kao, R. R., Kiss, I. Z., Hyder, K., & Arnold, M. (2008). Contact structures in the poultry industry in Great Britain: Exploring transmission routes for a potential avian influenza virus epidemic. BMC Veterinary Research, 4(1), 27. https://doi.org/10.1186/1746-6148-4-27
Fernandes, L. A. F., & Oliveira, M. M. (2008). Real-time line detection through an improved Hough transform voting scheme. Pattern Recognit., 41(1), 299-314. https://doi.org/10.1016/j.patcog.2007.04.003
Gates, R. S., Casey, K. D., Wheeler, E. F., Xin, H., & Pescatore, A. J. (2008). U.S. broiler housing ammonia emissions inventory. Atmos. Environ., 42(14), 3342-3350. https://doi.org/10.1016/j.atmosenv.2007.06.057
Gonzalez, R. C., & Woods, R. E. (2002). Digital image processing (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Holland, W. C. (2005). US6939218B1: Method and apparatus of removing dead poultry from a poultry house. Google Patents.
Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A.,… Guadarrama, S. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (pp. 7310-7311). Piscataway, NJ: IEEE. http://doi.org/10.1109/CVPR.2017.351
Li, G., Chesser, G. D., Huang, Y., Zhao, Y., & Purswell, J. L. (2021a). Development and optimization of a deep-learning-based egg-collecting robot. Trans. ASABE, 64(5), 1659-1669. https://doi.org/10.13031/trans.14642
Li, G., Huang, Y., Chen, Z., Chesser, G. D., Purswell, J. L., Linhoss, J., & Zhao, Y. (2021b). Practices and applications of convolutional neural network-based computer vision systems in animal farming: A review. Sensors, 21(4), 1492. https://doi.org/10.3390/s21041492
Li, G., Hui, X., Chen, Z., Chesser, G. D., & Zhao, Y. (2021c). Development and evaluation of a method to detect broilers continuously walking around feeder as an indication of restricted feeding behaviors. Comput. Electron. Agric., 181, 105982. https://doi.org/10.1016/j.compag.2020.105982
Li, G., Xu, Y., Zhao, Y., Du, Q., & Huang, Y. (2020). Evaluating convolutional neural networks for cage-free floor egg detection. Sensors, 20(2), 332. https://doi.org/10.3390/s20020332
Li, T. (2016). Study on caged layer health behavior monitoring robot system. PhD diss. China Agricultural University, College of Engineering.
Liu, H.-W., Chen, C.-H., Tsai, Y.-C., Hsieh, K.-W., & Lin, H.-T. (2021). Identifying images of dead chickens with a chicken removal system integrated with a deep learning algorithm. Sensors, 21(11), 3579. https://doi.org/10.3390/s21113579
National Chicken Council. (2020). U.S. broiler performance. Retrieved from https://www.nationalchickencouncil.org/about-the-industry/statistics/u-s-broiler-performance/
Octopus Biosafety. (2022). The XO solution for efficient and responsible poultry farming. Retrieved from https://www.octopusbiosafety.com/en/animal-welfare/
Okinda, C., Nyalala, I., Korohou, T., Okinda, C., Wang, J., Achieng, T.,… Shen, M. (2020). A review on computer vision systems in monitoring of poultry: A welfare perspective. Artif. Intell. Agric., 4, 184-208.
https://doi.org/10.1016/j.aiia.2020.09.002
Olanrewaju, H. A., Thaxton, J. P., Dozier, W. A., Purswell, J., Roush, W. B., & Branton, S. L. (2006). A review of lighting programs for broiler production. Int. J. Poult. Sci., 5(4), 301-308. https://doi.org/10.3923/ijps.2006.301.308
Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv:1804.02767, 1-6. https://arxiv.org/abs/1804.02767v1
Ren, G., Lin, T., Ying, Y., Chowdhary, G., & Ting, K. C. (2020). Agricultural robotics research applicable to poultry production: A review. Comput. Electron. Agric., 169, 105216. https://doi.org/10.1016/j.compag.2020.105216
Rong, W., Li, Z., Zhang, W., & Sun, L. (2014). An improved CANNY edge detection algorithm. Proc. 2014 IEEE Int. Conf. on Mechatronics and Automation (pp. 577-582). Piscataway, NJ: IEEE. https://doi.org/10.1109/ICMA.2014.6885761
Schwean-Lardner, K., Fancher, B. I., Gomis, S., Van Kessel, A., Dalal, S., & Classen, H. L. (2013). Effect of day length on cause of mortality, leg health, and ocular health in broilers. Poult. Sci., 92(1), 1-11. https://doi.org/10.3382/ps.2011-01967
Tabb, A., & Ahmad Yousef, K. M. (2017). Solving the robot-world hand-eye(s) calibration problem with iterative methods. Mach. Vis. Appl., 28, 569-590. https://doi.org/10.1007/s00138-017-0841-7
Tibot. (2022). Broiler farming-Spoutnic NAV. Retrieved from https://www.tibot.fr/elevage/robot-poulet-chair/
Tottori, J., Yamaguchi, R., Murakawa, Y., Sato, M., Uchida, K., & Tateyama, S. (1997). The use of feed restriction for mortality control of chickens in broiler farms. Avian Dis., 41(2), 433-437. https://doi.org/10.2307/1592200
Usher, C. T., Daley, W. D., Joffe, B. P., & Muni, A. (2017). Robotics for poultry house management. ASABE Paper No. 1701103. St. Joseph, MI: ASABE. https://doi.org/10.13031/aim.201701103
Vroegindeweij, B. A., Blaauw, S. K., IJsselmuiden, J. M., & van Henten, E. J. (2018). Evaluation of the performance of PoultryBot, an autonomous mobile robotic platform for poultry houses. Biosyst. Eng., 174, 295-315. https://doi.org/10.1016/j.biosystemseng.2018.07.015
Yanagi, T., Xin, H., & Gates, R. S. (2002). A research facility for studying poultry responses to heat stress and its relief. Appl. Eng. Agric., 18(2), 255-260. https://doi.org/10.13031/2013.7787
Zeng, A., Song, S., Yu, K.-T., Donlon, E., Hogan, F. R., Bauza, M.,… Rodriguez, A. (2018). Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. Proc. 2018 IEEE Int. Conf. on Robotics and Automation (ICRA) (pp. 3750-3757). Piscataway, NJ: IEEE. https://doi.org/10.1109/ICRA.2018.8461044