About CoralNet

CoralNet is a repository and a resource for benthic image analysis. It was created in 2012 by researchers at UC San Diego as part of the NSF project Computer Vision Coral Ecology. The site implements sophisticated computer vision algorithms that allow researchers, agencies, and private partners to rapidly annotate benthic survey images. The site also serves as a repository and collaboration platform for the scientific community.

Background

Global warming and local anthropogenic stressors are causing severe stress to coral reefs across the world. To take appropriate action, decision makers need accurate data over large spatio-temporal scales. The speed of data collection has increased tremendously in recent years, thanks to the proliferation of digital cameras and autonomous or remotely operated underwater vehicles. As a result, millions of images are collected each year across the world. The subsequent analysis, however, remains painfully slow, because each photo must be manually inspected by a trained expert.

Recent progress in machine learning and computer vision offers a compelling solution to this problem: a machine annotator can learn from human labels and then proceed to rapidly process large numbers of new images.

CoralNet offers an open-source framework to approach this solution. It serves as a repository of benthic images and allows for different modes of interaction between the machine annotator and the human expert. By its nature, CoralNet also provides a platform for collaboration and data sharing.

Data Agreement

CoralNet is committed to protecting the data privacy of its users and offers two privacy options, which are set at the source level.

Private

All of your images and annotations are hidden from the public. Your project will, however, appear in the source listing and map so that other researchers and interested parties can find you. The only information we share is:

1. Name of source

2. Source description and affiliation

3. Latitude and Longitude of your sites

4. Total number of images in your source

All other information remains hidden from the public. CoralNet also reserves the right to let the computer vision algorithm learn from annotations made on private sources. This will in no way compromise the integrity of your data, but it will help further boost the efficiency of the machine annotator for all users. Note: according to the US National Science Foundation guidelines on data policy, all scientific information should be made publicly available two years after collection. We therefore encourage users to switch their privacy setting from Private to Public when appropriate.

Public

In addition to the information shared in the private mode, all your source images and annotation data will be available for the public to browse and download (including original images in full resolution). We encourage all users to consider this privacy option.

The site

The CoralNet website consists of several modules:

Source creation: Your “source” is your space at CoralNet. You specify which labelset you want to use and your privacy settings, and invite collaborators to your source. The source interface also allows you to organize your images by a number of location keys, such as “site”, “habitat”, and “depth”.

Labelset: Specify what labels you want to use in your analysis. Choose from a set of already created labels or create your own if needed.

Import: Upload images to the server. You can also import annotations already completed in other systems such as Coral Point Count.
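
As a rough illustration of what such an import could contain, the short Python sketch below writes a point-annotation file with one row per annotated point (image name, pixel row, pixel column, label code). The column layout and label codes are illustrative assumptions, not a documented CoralNet or Coral Point Count format.

    # Illustrative sketch only: the column layout (Name, Row, Column, Label)
    # and the label codes are assumptions, not a documented import specification.
    import csv

    # Hypothetical point annotations: (image file name, pixel row, pixel column, label code)
    points = [
        ("site1_2012-05-01_q1.jpg", 143, 522, "Porites"),
        ("site1_2012-05-01_q1.jpg", 390, 81, "CCA"),
        ("site1_2012-05-02_q2.jpg", 77, 604, "Macroalgae"),
    ]

    with open("annotations.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "Row", "Column", "Label"])
        writer.writerows(points)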

Annotation: Annotate your images right in the web browser using a point count interface. CoralNet supports manual annotation through an interface similar to the popular Coral Point Count with Excel Extensions. Several users can annotate the same set of images at the same time, and the system tracks who annotated what. Each night at 3:03 AM PST, CoralNet reads all annotations provided by the users of each source and, if new annotations have been provided, trains an automated classifier as detailed in the paper below. This classifier, or 'robot' as we call it, will then provide annotations for the remaining images in your source. Note that the robot is specific to your source, meaning that it only uses images and annotations from your source for training. The robot's accuracy and confusion matrix can be viewed on the source main page. The robot's annotations can then be accepted or corrected by the users during the next annotation session. As more annotations are provided, new versions of the robot are trained that (hopefully) achieve higher and higher accuracy. Users may continue in this semi-automated annotation mode (accepting or correcting the robot's suggestions) until all images are annotated, or stop at any point and export the robot's annotations for the remaining images along with the user-provided annotations.

The computer vision algorithm used by CoralNet was developed in the following publication: Beijbom O., Edmunds P.J., Kline D.I., Mitchell B.G., Kriegman D., "Automated Annotation of Coral Reef Survey Images", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, June 2012.
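
To make the workflow concrete, here is a minimal Python sketch of the human-in-the-loop cycle described above: a source-specific classifier is trained on the human-provided point labels, then suggests labels for the remaining points, which users accept or correct before the next retraining. This is a conceptual outline only, not CoralNet's implementation; the feature extraction is a placeholder and the random forest classifier is a stand-in for the algorithm in the paper above.

    # Conceptual sketch of the semi-automated annotation loop; NOT CoralNet's code.
    # extract_features() is a hypothetical stand-in for the texture and color
    # features computed around each annotated point in the real system.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(points):
        # Placeholder: real features would be computed from the image patch around each point.
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(points), 16))

    def train_robot(labeled_points, labels):
        # Train a source-specific classifier ("robot") from human annotations only.
        robot = RandomForestClassifier(n_estimators=100, random_state=0)
        robot.fit(extract_features(labeled_points), labels)
        return robot

    def suggest_labels(robot, unlabeled_points):
        # The robot proposes labels; users accept or correct them in the next session.
        return robot.predict(extract_features(unlabeled_points))

    # Each time new human annotations arrive, retrain and re-suggest.
    labeled = [("img1.jpg", 10, 20), ("img1.jpg", 50, 60), ("img2.jpg", 30, 40)]
    labels = ["Coral", "Sand", "Coral"]
    robot = train_robot(labeled, labels)
    print(suggest_labels(robot, [("img3.jpg", 15, 25)]))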

Statistics: Get statistics detailing the coverage of each category in your labelset, along with aggregated statistics at the functional-group level.
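
As a minimal sketch of how such point-count statistics can be derived (the label codes and the label-to-functional-group mapping below are made-up examples), percent cover is simply the fraction of annotated points assigned to each label:

    # Minimal sketch of point-count coverage statistics; the labels and the
    # functional-group mapping are hypothetical examples.
    from collections import Counter

    annotations = ["Porites", "Porites", "CCA", "Sand", "Macroalgae", "Sand"]
    functional_group = {"Porites": "Hard coral", "CCA": "Algae",
                        "Macroalgae": "Algae", "Sand": "Other"}

    n = len(annotations)
    label_counts = Counter(annotations)
    label_cover = {lab: 100.0 * cnt / n for lab, cnt in label_counts.items()}

    group_cover = Counter()
    for lab, cnt in label_counts.items():
        group_cover[functional_group[lab]] += 100.0 * cnt / n

    print(label_cover)        # per-label percent cover
    print(dict(group_cover))  # aggregated functional-group percent cover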

Export: Export all images and annotations to your desktop for backup or further processing.

Funding support

CoralNet development is supported by the NSF Computer Vision Coral Ecology grant #ATM-0941760. We ask that you cite the following paper if you have used CoralNet in your work:

Beijbom O., Edmunds P.J., Kline D.I., Mitchell B.G., Kriegman D., "Automated Annotation of Coral Reef Survey Images", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, June 2012.

Contact

Contact us by filling out this form.

Affiliation

Development team

Oscar Beijbom - Project Manager

Previously the lead developer of the Hovding invisible bicycle helmet, Oscar is now a PhD candidate in the Computer Science & Engineering Department at the University of California, San Diego. His research focuses on automating the scientific analysis of reef survey images, using elements from texture recognition, color analysis, and machine learning. http://vision.ucsd.edu/~beijbom/website/

Stephen Chan - Lead Developer

Stephen is a Masters student at the Computer Science department of the University of California, San Diego. His interests include user interfaces, websites, and game programming.

Devang Sampat

Devang received his B.S. in Computer Science from the University of California, San Diego in 2013. He worked on CoralNet in the early days while at UCSD. He is currently a Software Developer at Hulu.

Andrew Hu

Andrew is a third-year Computer Science undergraduate at the University of California, San Diego.

Jeff Sandvik

Jeff is a fourth-year Computer Science undergraduate at the University of California, San Diego.

Advisory team

David Kriegman

In September 2002, David joined the Computer Science & Engineering Department at the University of California, San Diego. Previously, he was on the faculty of the Computer Science Department and Beckman Institute at the University of Illinois at Urbana-Champaign, and of the Center for Computational Vision and Control and the Electrical Engineering Department at Yale University. http://cseweb.ucsd.edu/~kriegman/

Serge Belongie

Serge Belongie received the B.S. degree (with honor) in Electrical Engineering from the California Institute of Technology in 1995 and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences (EECS) at U.C. Berkeley in 1997 and 2000, respectively. While at Berkeley, his research was supported by a National Science Foundation Graduate Research Fellowship. He is also a co-founder of Digital Persona, Inc., and the principal architect of the Digital Persona fingerprint recognition algorithm. He is currently a Professor in the Computer Science and Engineering Department at U.C. San Diego. http://cseweb.ucsd.edu/~sjb/

David Kline

David is a coral reef biologist at the Scripps Institution of Oceanography who studies the fate of coral reefs in a high-CO2 future, on scales from molecular to ecosystem. In particular, he collaborates with computer vision scientists, engineers, chemists, and physiologists to develop new techniques for studying the impact of climate change on coral reefs. http://scrippsscholars.ucsd.edu/dkline/biocv

Tali Treibitz

Tali Treibitz received her BA degree in Computer Science and her PhD in Electrical Engineering from the Technion-Israel Institute of Technology in 2001 and 2010, respectively. Between 2010 and 2013 she was a postdoctoral researcher in the Department of Computer Science and Engineering at the University of California, San Diego, and in the Marine Physical Lab at the Scripps Institution of Oceanography. She currently heads the Marine Sensing Lab in the School of Marine Sciences at the University of Haifa. http://vision.ucsd.edu/~tali/

Ben Neal - Graduate Student

After finishing his PhD at the Scripps Institution of Oceanography, investigating how corals respond to high-temperature water stress connected to global warming, Ben is now a postdoctoral researcher with the Catlin Seaview Survey, based at the University of Queensland Global Change Institute (http://catlinseaviewsurvey.com/). CoralNet is essential to this project for processing the largest volume of coral reef benthic imagery ever collected.

Gregory Mitchell

Dr. B. Greg Mitchell is a Research Biologist and Senior Lecturer at the University of California San Diego, Scripps Institution of Oceanography (SIO). http://spg.ucsd.edu/People/Greg/