Recently CVision AI completed work on an underwater stereo camera we called ShoalSight and an algorithm-assisted workflow implemented in Tator. This work was funded by a grant from the Massachusetts Department of Marine Fisheries and performed in collaboration with UMass Dartmouth's School for Marine Science and Technology (SMAST). SMAST conducts stock assessment surveys using video data captured from within an open trawling net, reducing both the ecological impact and the time needed to conduct the survey. The goal of our work was both to improve the quality of the video data through hardware upgrades and to reduce the time needed to analyze the video through increased algorithm assistance. The image below shows Tator being used for a monoscopic workflow (A) and the codend setup (B and C) for the SMAST video trawl survey. Stereo cameras were introduced to allow for length measurements as well as counting and classification.
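Stereo cameras enable length measurement because two calibrated views let you recover depth from disparity. Below is a minimal sketch of that geometry under a pinhole-stereo model; the calibration values, pixel coordinates, and disparities are illustrative, not ShoalSight's.

```python
# Sketch of pinhole-stereo length measurement. Camera parameters and
# pixel coordinates below are illustrative, not ShoalSight's calibration.
import math

def to_3d(x_px, y_px, disparity_px, focal_px, baseline_m, cx, cy):
    """Back-project a pixel with known disparity into camera coordinates."""
    z = focal_px * baseline_m / disparity_px  # depth from disparity
    x = (x_px - cx) * z / focal_px
    y = (y_px - cy) * z / focal_px
    return (x, y, z)

def length_between(p, q):
    """Euclidean distance between two 3D points (e.g. snout and tail)."""
    return math.dist(p, q)

# Example: snout and tail annotated in the left image, with disparities
# found by matching the same points in the right image.
f, B, cx, cy = 1400.0, 0.12, 960.0, 540.0  # hypothetical calibration
snout = to_3d(900.0, 530.0, 60.0, f, B, cx, cy)
tail = to_3d(1100.0, 535.0, 58.0, f, B, cx, cy)
print(f"estimated length: {length_between(snout, tail):.3f} m")
```

A monoscopic view can count and classify, but without the disparity term there is no absolute scale, which is why the stereo upgrade was needed for lengths.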
New features in Tator 1.3
Tator 1.3 introduces a range of exciting new features designed to enhance the user experience, along with crucial dependency updates, performance enhancements, and bug fixes. One of the standout features is the nested folders functionality, allowing users to organize their media files in a way that mimics a traditional filesystem. This update also includes a refreshed project detail view.
A new metadata export view offers users granular control over exporting metadata in CSV format, providing greater flexibility and precision. Additionally, the mark-based versioning system lays the groundwork for tracking every change to every piece of metadata in Tator. A user interface for interacting with this historical data is set to be introduced in Tator 1.4.
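The "granular control" in the export view amounts to choosing which rows and columns end up in the CSV. The sketch below illustrates that idea with the standard library; the record fields are hypothetical and do not reflect Tator's actual export schema.

```python
# Illustrative only: a miniature version of a metadata-export-to-CSV step.
# The record fields here are hypothetical, not Tator's actual schema.
import csv
import io

records = [
    {"media": "dive_01.mp4", "frame": 120, "species": "cod", "length_cm": 41.5},
    {"media": "dive_01.mp4", "frame": 340, "species": "haddock", "length_cm": 37.0},
    {"media": "dive_02.mp4", "frame": 55, "species": "cod", "length_cm": 44.2},
]

# Granular control: filter the rows and pick the columns before writing.
columns = ["media", "frame", "species"]
rows = [r for r in records if r["species"] == "cod"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```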
For applet developers, Tator 1.3 simplifies the deployment and management process with the introduction of hosted templates, making it more convenient than ever to develop and maintain applets.
Canvas Applets Showcase
Canvas Applets in Tator 1.2
We are pleased to announce the release of a new and powerful feature in Tator 1.2 called Canvas Applets. With canvas applets, developers can seamlessly integrate custom annotation experiences in Tator. In the video below, we walk you through two annotation experiences developed at CVision: one using Meta's SAM algorithm to quickly make segmentation masks, and another used by scientists at NOAA to estimate percent coverage of biological and geographical features. Visit our tutorial to learn how to make canvas applets of your own.
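The percent-coverage arithmetic such an applet performs is straightforward once pixels (or grid cells) carry labels. The sketch below is not the NOAA applet itself, just the underlying calculation, using a hypothetical label grid.

```python
# Percent-coverage arithmetic a tool like this might perform, shown on a
# hypothetical grid of per-cell labels (not the actual NOAA applet).
from collections import Counter

labels = [
    ["sand", "sand", "kelp", "kelp"],
    ["sand", "rock", "kelp", "kelp"],
    ["rock", "rock", "sand", "kelp"],
]

counts = Counter(label for row in labels for label in row)
total = sum(counts.values())
coverage = {name: 100.0 * n / total for name, n in counts.items()}
for name in sorted(coverage):
    print(f"{name}: {coverage[name]:.1f}%")
```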
Computer Vision for Fisheries
This blog post is the first entry in a series discussing a computer vision project conducted jointly by the Oregon Department of Fish and Wildlife (ODFW), the Environmental Defense Fund, and CVision AI. This entry is the project's Executive Summary, intended for a business audience; for a more technical overview, including the code used on the project, please reference the lengthier article found here. The project was spearheaded by the two authors of this blog, Adam Ansari and Varun Hande, both data science graduate students at the University of San Francisco and Machine Learning Research Scientists at the Environmental Defense Fund, where they work on improving the SmartPass project in collaboration with CVision AI. Together they formed the two data scientists on the team who pushed the science on this project to the level demonstrated in this blog.
Tator OSS and Tator Enterprise
As we continue to develop Tator, our web platform for video annotation and analysis, we have identified two distinct user groups. The first group consists of Open Source Software (OSS) Users who are typically small teams or individual researchers looking to use Tator for a single project or field work. They prefer to install Tator on a single on-premise machine, without relying on cloud services. They have limited data, typically less than 10TB, and require access for a small team of 1-10 users. On the other hand, Enterprise Users are medium to large organizations that require high availability and data durability, with scalability and security being their top concerns. They are interested in using the cloud and require access to large amounts of data, often in the range of tens or hundreds of TB.
As developers, we recognize the importance of serving both types of users, and have endeavored to do so with a single open source codebase. However, we have found that our current codebase is not optimal for either group. For OSS Users, Tator is difficult to install and configure due to its reliance on Kubernetes. Despite the development of our install script and support for microk8s, OSS Users often experience issues related to container networking, particularly with DNS, firewalls, and proxies. Meanwhile, for Enterprise Users, the value proposition of a Tator Enterprise Subscription is unclear since Tator is entirely open source.
To address these issues, our repository will be split into two separate repositories, Tator OSS and Tator Enterprise, starting with version 1.1.0, our next major milestone release.
Tator v1.0.0 has landed
Introducing v1.0.0
We are excited to announce the latest update to our web-based software platform, Tator v1.0.0, which marks a significant milestone for the product. This release brings about changes at both the architectural layer and API level, providing a rock-solid foundation for future iterations of the platform. In addition to laying the groundwork for future features, Tator v1.0.0 brings a plethora of bug fixes, UI consistency improvements, and quality of life enhancements that we believe our users will appreciate.
One of the most noteworthy changes in this update is the removal of the Elasticsearch subsystem, which caused significant ripples throughout the API. Although Elasticsearch and PostgreSQL can complement each other, managing their integration can lead to challenges related to data consistency and maintenance. By utilizing structured metadata in PostgreSQL, we can achieve our search and analytics requirements without the need for Elasticsearch, reducing maintenance costs and improving scalability.
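The key idea is that attribute searches can be served directly from the relational database, so there is no second search engine to keep in sync. The sketch below is conceptual: SQLite's JSON1 functions stand in for PostgreSQL's JSONB operators, and the schema is hypothetical, not Tator's.

```python
# Conceptual sketch: metadata search served from the relational store.
# SQLite's JSON1 functions stand in for PostgreSQL's JSONB here; the
# schema is hypothetical, not Tator's actual tables.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, name TEXT, attributes TEXT)")
db.executemany(
    "INSERT INTO media (name, attributes) VALUES (?, ?)",
    [
        ("tow_01.mp4", json.dumps({"vessel": "RV Lucky", "depth_m": 80})),
        ("tow_02.mp4", json.dumps({"vessel": "RV Lucky", "depth_m": 120})),
        ("tow_03.mp4", json.dumps({"vessel": "RV Osprey", "depth_m": 95})),
    ],
)

# An attribute filter expressed as ordinary SQL -- no Elasticsearch,
# no data-consistency gap between two systems.
rows = db.execute(
    "SELECT name FROM media "
    "WHERE json_extract(attributes, '$.vessel') = ? "
    "AND json_extract(attributes, '$.depth_m') >= ?",
    ("RV Lucky", 100),
).fetchall()
print(rows)
```

In PostgreSQL the same filter would use JSONB operators (and indexes on them), which is what makes dropping the separate search subsystem practical.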
Migrating a Kubernetes deployment to Docker Compose
This blog post will go over the procedure for migrating a Tator deployment based on Kubernetes (pre-1.1.0) to Docker Compose (1.1.0 and later). Because all dependencies use the same Docker images as version 1.0.0, this migration is fairly simple: it consists of pointing the new compose deployment at the directory that was previously in use.
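As a rough sketch of what "pointing the new compose deployment at the existing directory" looks like, a compose file can bind-mount the host paths the Kubernetes deployment was already writing to. The service names, image tags, and host paths below are hypothetical, not the actual Tator compose file.

```yaml
# Hypothetical compose excerpt: reuse the data directories the previous
# Kubernetes deployment wrote to, so nothing needs to be copied.
services:
  postgis:
    image: postgis/postgis:14-3.2          # illustrative image tag
    volumes:
      - /media/previous_deployment/postgis:/var/lib/postgresql/data
  tator:
    image: cvisionai/tator_online          # illustrative image name
    volumes:
      - /media/previous_deployment/media:/media
```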
Upgrading to Tator 1.0.0
This blog post will go over the procedure for upgrading a Tator deployment to the upcoming major release, 1.0.0, with most features recently merged into the main branch. This release requires an upgrade to PostgreSQL 14 and drops the Elasticsearch dependency for standard queries (although it is still used for storing logs). Deprecated bucket definition formats are also removed in this release. This post assumes that the Tator deployment is currently running version 0.2.23, was installed with the install script, and is running on microk8s.
Converting Bucket Configurations for Tator 1.0.0
What changed
In the 1.0.0 release of Tator, the fields related to default upload, live, and backup buckets in the values.yaml configuration file have changed to match the changes introduced in the 0.2.22 release. This is a breaking change: your Tator deployment will break if your deployment administrator does not follow these instructions during the upgrade to 1.0.0.
Updating Default Bucket Configurations for Tator 1.0.0
What changed
In the 0.2.22 release of Tator, support for OCI Object Storage as a bucket type was added. This required a refactor of how bucket configurations are stored by Tator, which is not backwards compatible. The 0.2.22 release deprecates the existing method of creating and updating buckets, but still allows buckets created this way to function. Release 1.0.0 removes support for buckets created this way, but a utility is provided to assist your migration from the old configuration to the new one.