Estimating Yield & UC Vision
An automated scanning robot providing vineyards with improved yield predictions.
Imprecise fruit yield predictions, with errors as large as 30%, add significant time and cost to winegrape production and processing, impacting vineyard and winery harvest planning, labour and dry goods requirements, equipment allocation, and downstream winery operations. Researchers are developing automated 3D scanning robots to deliver significantly improved yield predictions earlier in the growing season, enabling producers to better plan for and execute busy harvest operations.
Integrating automated 3D scanning robots into vineyards may be a game-changer for the wine industry, offering unprecedented accuracy in yield predictions.
The Occlusion Project, a collaboration led by the University of Canterbury (UC), with viticultural expertise from Lincoln University and Plant & Food Research, is developing these robots to provide precise estimates of grape yields.
Principal Investigator and project leader Professor Richard Green (UC) said, “We have the perfect collaborators, with Plant & Food Research positioned in Marlborough where 80% of Sauvignon Blanc is grown, and Lincoln being one of only two universities in New Zealand with viticultural experts.”
Senior Scientist at Plant & Food Research, Dr Julian Theobald said, “Earlier-season accurate yield estimation is crucial; being able to better predict how much fruit there is to manage later in the growing season and at harvest will have significant financial and logistical benefits for both viticulturists and winemakers.”
Accurate predictions impact every step of production, from managing growth in the vineyard to planning harvest, workforce planning and all downstream winery operations. Right now, an error of less than plus or minus 15% at block level at veraison (the onset of grape ripening) is considered good by industry; for a 100-tonne block, that means an estimate anywhere between 85 and 115 tonnes.
Current methods are predominantly manual, relying on counting buds, inflorescences, and bunches at different times in the growing season, with bunches often hidden (or ‘occluded’) later in the season as vine canopies become denser with more and larger leaves. These methods are labour-intensive and time-consuming, so the sampling rate is low, typically fewer than 50 vines out of the tens of thousands on a block.
UC Project Manager Dr Oliver Batchelor says several iterations of the original prototype robotic unit are currently in action. The unit is equipped with multiple wide-angle cameras that capture 60 to 120 high-resolution images of the vines per second, taken from multiple angles and registered to a staggering sub-millimetre accuracy. A flash system ensures all images are recorded under the same lighting conditions.
Using GPS technology, the robot scans vineyard rows, capturing detailed data on each vine. Scans are taken every two to three weeks to build up data through the growing season. Initial scans map the canes without leaf cover; subsequent scans capture leaf emergence, inflorescences (early flower structures), then flowering and fruiting.
The images are then stitched together and processed with a 3D rendering technique called Gaussian splatting, in conjunction with deep learning AI. This produces highly detailed 3D models, including the mapping of structures hidden behind others, such as canes behind full leaf cover.
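To give a flavour of the technique: in Gaussian splatting, a scene is represented as a cloud of 3D Gaussians, each with a position, covariance (shape), colour and opacity, which are projected onto the camera image and blended front to back. The sketch below is a toy renderer illustrating that core idea, not the project's actual pipeline; all names and values are illustrative assumptions.

```python
# A minimal sketch of the idea behind Gaussian splatting (not the project's
# pipeline): project each 3D Gaussian onto a pinhole camera and
# alpha-composite the resulting 2D "splats" front to back.
import numpy as np

def render_splats(means, covs, colors, opacities, f, width, height):
    """Rasterise 3D Gaussians onto an image; camera looks down +z."""
    img = np.zeros((height, width, 3))
    transmittance = np.ones((height, width))

    # Sort front to back by depth so compositing is order-correct.
    order = np.argsort(means[:, 2])
    ys, xs = np.mgrid[0:height, 0:width]

    for i in order:
        mx, my, mz = means[i]
        if mz <= 0:
            continue  # behind the camera
        # Perspective projection of the Gaussian's mean onto the image plane.
        u = f * mx / mz + width / 2
        v = f * my / mz + height / 2
        # First-order (Jacobian) projection of the 3D covariance to 2D.
        J = np.array([[f / mz, 0, -f * mx / mz**2],
                      [0, f / mz, -f * my / mz**2]])
        cov2d = J @ covs[i] @ J.T
        inv = np.linalg.inv(cov2d)
        d = np.stack([xs - u, ys - v], axis=-1)
        # Gaussian falloff gives each splat a soft elliptical footprint.
        w = np.exp(-0.5 * np.einsum('...i,ij,...j', d, inv, d))
        alpha = np.clip(opacities[i] * w, 0, 0.999)
        img += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1 - alpha

    return img

# Two overlapping splats; the nearer one partly occludes the farther one.
means = np.array([[0.0, 0.0, 5.0], [0.3, 0.0, 8.0]])
covs = np.stack([np.eye(3) * 0.05, np.eye(3) * 0.2])
colors = np.array([[0.2, 0.8, 0.2], [0.5, 0.3, 0.7]])
opacities = np.array([0.9, 0.8])
image = render_splats(means, covs, colors, opacities, f=200, width=64, height=64)
```

In a full system, the parameters of many thousands of such Gaussians are optimised against the captured photographs, which is where the deep learning comes in.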
Deep learning algorithms rely on artificial neural networks consisting of many layers of interconnected nodes (‘neurons’), sometimes hundreds of layers deep. These networks, inspired by early models of brain function, process data, recognise patterns, and predict outcomes based on learned information.
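As a toy example of that layered structure, the sketch below builds a small fully connected network in plain NumPy and pushes an input through it; the layer sizes and random weights are arbitrary illustrations, not anything from the project.

```python
# A toy fully connected network: each layer multiplies its input by a
# weight matrix, adds a bias, and applies a non-linearity. Sizes and
# weights here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 16, 16, 1]  # input, two hidden layers, output

# One weight matrix and bias vector per layer.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass x through each layer; ReLU keeps hidden activations non-linear."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0, x @ W + b)
    return x @ weights[-1] + biases[-1]

prediction = forward(np.array([0.5, -1.2, 3.0, 0.1]))
```

Training adjusts the weights so that predictions match labelled examples; the networks used for 3D reconstruction are vastly larger but follow the same principle.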
“The reconstruction is the most challenging part, but now that it’s functioning, the potential for the industry is immense,” Professor Green says. “We’ve solved the challenge of leaf occlusion – when leaves block the view of flowers and fruit.”
The AI reconstructions feed into a computer model. Plant & Food Research Senior Scientist Dr Junqi Zhu is a ‘plant modeller’. He is tasked with modelling phenology (the study of seasonal changes in plants and animals, such as flowering, migration, and breeding, in response to environmental factors like temperature and daylight), bunch numbers, berry numbers per bunch, berry mass, bunch mass and vine yield, drawing on meteorological records.
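The project's model is far richer than this, but the basic way such yield components combine can be shown in a few lines; every number below is made up purely for illustration.

```python
# Illustrative yield-component arithmetic (not the project's model):
# per-vine yield is bunch count x berries per bunch x mean berry mass.
# All values are hypothetical.
bunches_per_vine = 40
berries_per_bunch = 90
berry_mass_g = 1.6  # mean berry mass in grams at harvest

yield_per_vine_kg = bunches_per_vine * berries_per_bunch * berry_mass_g / 1000
vines_per_hectare = 2200
block_yield_t_per_ha = yield_per_vine_kg * vines_per_hectare / 1000
print(f"{yield_per_vine_kg:.2f} kg/vine, {block_yield_t_per_ha:.1f} t/ha")
```

The value of early, accurate scanning is that each of these components can be measured or forecast before harvest, rather than inferred from a handful of hand-sampled vines.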
A new viticultural research facility, Te Whenua Tupu - The Living Lab, will utilise the imaging technology developed and serve as a physical twin, providing reference data to continually re-parameterise and improve the computer models.
Associate Professor Amber Parker, of Lincoln University’s Department of Wine, Food & Molecular Biosciences, leads the ground-truthing work for the robotic unit, in which the robotic data is compared with manual measurements to assess its accuracy and practical value. Her team is also working on flower modelling. She explains, “How we go from flowers to fruits is not well modelled. Part of the work is to understand that better.”
The project is also interested in other insights the technology could offer growers. It could potentially help assess vine balance, the relationship between vegetative growth and fruit production. Traditionally, vine balance is evaluated by weighing pruned material, but automated 3D scanning could offer more precise and efficient measurements. “Being able to identify struggling vines at a very early stage will be highly beneficial to growers”, Professor Green said.
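One widely used measure of vine balance is the Ravaz index, the ratio of fruit yield to pruning weight for the same vine; values in roughly the 4 to 10 range are often cited as balanced. A minimal illustration with hypothetical numbers:

```python
# Ravaz index: fruit yield divided by pruning (cane) weight for the same
# vine, a common vine-balance metric. The values are hypothetical; a
# balanced range of roughly 4-10 is often cited in viticulture texts.
fruit_yield_kg = 5.8      # fruit harvested from one vine
pruning_weight_kg = 0.9   # dormant-season cane prunings from the same vine

ravaz_index = fruit_yield_kg / pruning_weight_kg
print(f"Ravaz index: {ravaz_index:.1f}")  # ~6.4, within a balanced range
```

A 3D scan that estimates both fruit load and cane volume could, in principle, yield this kind of metric for every vine rather than a sampled few.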
The multi-disciplinary team is working towards a commercial system for vintners, envisaging a unit that operates on its own or can be attached to a tractor carrying out other work. The deep learning algorithms can then be adapted for different grape varieties or other fruits.
The five-year, $6.1 million initiative is backed by the Ministry of Business, Innovation and Employment (MBIE) Endeavour Fund. The system will significantly enhance vineyard management, improving efficiency and sustainability across the industry.