
November Week 4

  • Writer: Paul Flowers
  • Nov 27, 2017
  • 1 min read

The group has been testing the different thresholding methods on different image masks and has been trying to decide which method works best. To do so, we thought it was necessary to develop an evaluation metric for our images. As before, we plan on using the DRIVE database for our ground truth, so whatever images we test can be compared against the DRIVE ground truth for blood vessel detection. While no method will reach 100% accuracy, we are hoping to see improvement in the accuracy of our reproduced vessel masks. Over the coming week, we will compare the resulting masks from the local, adaptthresh, and global thresholding methods against the provided ground truth to give ourselves an adequate evaluation metric.
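To make the comparison concrete, here is a minimal MATLAB sketch of the kind of evaluation described above. It is not our final metric: the file names are placeholders, the 0.5 sensitivity value is just an example, and the input is assumed to already be a vessel-enhanced grayscale image. It simply scores each thresholded mask by pixel-wise accuracy against a DRIVE ground-truth image.

% Assumes I is a vessel-enhanced grayscale image and gt is the matching
% DRIVE manual segmentation (file names below are placeholders).
I  = im2double(rgb2gray(imread('21_training.tif')));
gt = imread('21_manual1.gif') > 0;     % binary vessel ground truth

% Three candidate thresholding methods
globalMask = imbinarize(I);                                  % global (Otsu) threshold
localMask  = imbinarize(I, 'adaptive', 'Sensitivity', 0.5);  % local adaptive threshold
T          = adaptthresh(I, 0.5);                            % explicit adaptthresh threshold map
adaptMask  = imbinarize(I, T);

% Pixel-wise accuracy against the ground truth as a simple evaluation metric
acc = @(mask) nnz(mask == gt) / numel(gt);
fprintf('global: %.3f  local: %.3f  adaptthresh: %.3f\n', ...
        acc(globalMask), acc(localMask), acc(adaptMask));

Because vessel pixels make up only a small fraction of each image, plain accuracy will look high even for poor masks, so an overlap measure such as the Dice coefficient could be computed the same way to complement it.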

 
 
 
