edge_detector tool
The first tool was suggested by Evan Williamson, Digital Infrastructure Librarian at the U of I. It was prompted by the large number of archeological projects we were collaborating on, including this example of an archeological excavation at the local Moscow High School, which unearthed this paleolithic bug juice container and He-Man leg.

All of these projects involved controlled-context photographs of an object beside a ruler, sometimes with a color swatch, under varying backgrounds and lighting setups. The idea was to identify and extract the objects. Since different fellowship collaborators wanted these objects reproduced on both white and black backgrounds, we thought providing both of those options, as well as a PNG file with a transparent background, would work best.
✺

The tool implements a neural network called IS-Net, originally developed for the 2022 paper "Highly Accurate Dichotomous Image Segmentation" by Xuebin Qin and others. The model performs fine-grained, binary foreground/background segmentation, and it is often used for isolating retail objects for online marketplaces. The model is completely open access: the user opens and closes sessions within the Python script, and no paid API key is required.
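The open-session, no-API-key workflow is easy to sketch. This is a minimal example, assuming rembg is installed (`pip install rembg`); the folder and file names are hypothetical, not the ones from the repository.

```python
# Minimal sketch of removing a background with rembg; the library fetches
# model weights locally on first use, so no paid API key is involved.
from pathlib import Path

def strip_background(src: Path, dst: Path) -> None:
    """Read a photograph, remove its background, and write a transparent PNG."""
    from rembg import remove  # third-party, imported lazily so the model only loads when called
    dst.write_bytes(remove(src.read_bytes()))

# Example call (paths are hypothetical):
# strip_background(Path("A/find_001.jpg"), Path("B/find_001.png"))
```

With no extra arguments, `remove` falls back to rembg's default model; the next section covers picking a specific one.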
✺

There are a few model variants available under the greater rembg (remove background) library. At first I was using the original u2net model, but I found that the isnet-general-use model is more accurate and produces finer lines around the object, if a little slower to process.
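Switching between models is a one-line change in rembg: you create a named session and pass it to `remove`. A hedged sketch, again assuming rembg is installed and with hypothetical paths:

```python
# Sketch of selecting a specific rembg model via a named session.
from pathlib import Path

def remove_with_model(src: Path, dst: Path, model: str = "isnet-general-use") -> None:
    """Remove a photo's background with a named rembg model and save a PNG."""
    from rembg import remove, new_session  # third-party; weights download on first use
    # "u2net" is the original, faster model; "isnet-general-use" traces finer edges
    session = new_session(model)
    dst.write_bytes(remove(src.read_bytes(), session=session))

# Example call (paths are hypothetical):
# remove_with_model(Path("A/find_001.jpg"), Path("B/find_001.png"))
```

Reusing one session across a whole folder of images also avoids reloading the model for every file.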
✺

In both tools you’ll notice I have a fairly Python for Dummies approach to structure, with original images being dropped into folder A, transparent-backed PNGs into B, white backgrounds into C, and black into D. After cloning the repository from GitHub, you can just follow the steps outlined in the setup.md file for either Mac or Windows users to use these scripts.
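The B-to-C/D step, flattening each transparent PNG onto white and black, can be sketched with Pillow. This is a simplified stand-in for the repository's scripts, assuming Pillow is installed and that the transparent PNGs are already in folder B; the function names here are mine, not the repository's.

```python
# Sketch of the folder pipeline: transparent PNGs in B become white-background
# JPEGs in C and black-background JPEGs in D.
from pathlib import Path
from PIL import Image  # third-party: pip install Pillow

def composite_on(color: tuple, png_path: Path, out_path: Path) -> None:
    """Flatten a transparent PNG onto a solid background color."""
    fg = Image.open(png_path).convert("RGBA")
    bg = Image.new("RGBA", fg.size, color)
    Image.alpha_composite(bg, fg).convert("RGB").save(out_path)

def fill_backgrounds(src_dir: Path, white_dir: Path, black_dir: Path) -> None:
    """Write white- and black-background JPEGs for every PNG in src_dir."""
    for png in src_dir.glob("*.png"):
        composite_on((255, 255, 255, 255), png, white_dir / (png.stem + ".jpg"))
        composite_on((0, 0, 0, 255), png, black_dir / (png.stem + ".jpg"))

# Example call (folder names follow the A/B/C/D layout described above):
# fill_backgrounds(Path("B"), Path("C"), Path("D"))
```

Keeping each output type in its own folder means a collaborator who only wants, say, the black-background set can grab one directory and ignore the rest.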
✺