Welcome to Orange3 Image Analytics documentation!¶
Widgets¶
Import Images¶
Import images from a directory (or directories).
Outputs
- Data: Dataset describing one image in each row.
Import Images walks through a directory and returns one row per located image. Columns include the image name, path to the image, width, height, and image size. The column with the image path is later used as an attribute for image visualization and embedding.
- Currently loaded folder.
- Select the folder to load.
- Click Reload to update imported images.
- Information on the input.
- Access help.
You can load a folder containing subfolders. In this case Orange will consider each subfolder as a class value. In the example above, Import Images loaded 26 images belonging to two categories. These two categories will be used as class values.
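The sketch below shows roughly what such a directory walk produces. It is only an illustration using the Python standard library and Pillow; the Fruits folder, file extensions, and column names are made up, and the widget's own implementation may differ:

    import os
    from PIL import Image  # Pillow

    rows = []
    root = "Fruits"  # hypothetical folder; subfolder names become class values
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith((".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff")):
                continue
            path = os.path.join(dirpath, name)
            with Image.open(path) as img:
                width, height = img.size
            rows.append({
                "image name": os.path.splitext(name)[0],
                "image": path,                      # later used for visualization and embedding
                "width": width,
                "height": height,
                "size": os.path.getsize(path),      # file size in bytes
                "category": os.path.relpath(dirpath, root) if dirpath != root else "",
            })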
Example¶
Import Images is likely the first widget you will use in image analysis. It loads images and creates class values from folders. In this example we used Import Images to load 26 paintings by either Monet or Manet.
We can observe the result in a Data Table. See how Orange added an extra class attribute with values Monet and Manet?
Now we can proceed with standard machine learning methods. We will send images to Image Embedding, where we will use the Painters embedder to retrieve image vectors.
Then we will use Test & Score and Logistic Regression to build a model for predicting the author of a painting. We get a perfect score. How come? It turns out these were the images the Painters embedder was trained on, so high accuracy is expected.
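For reference, the same evaluation can be scripted. The sketch below is only an outline: the embeddings.tab file name is hypothetical, and the exact CrossValidation call style differs between Orange versions:

    import Orange

    # Assumes a table with embedding columns and the painter as the class variable;
    # this mirrors what Test & Score with Logistic Regression computes in the workflow.
    data = Orange.data.Table("embeddings.tab")  # hypothetical file
    learner = Orange.classification.LogisticRegressionLearner()

    cv = Orange.evaluation.CrossValidation(k=10)
    results = cv(data, [learner])
    print("Classification accuracy:", Orange.evaluation.CA(results))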
Image Viewer¶
Displays images that come with a data set.
Inputs
- Data: A data set with images.
Outputs
- Data: Images that come with the data.
- Selected images: Images selected in the widget.
The Image Viewer widget can display images from a data set that are stored locally or on the internet. The widget will look for an attribute with type=image in the third header row. It can be used for image comparison, while looking for similarities or discrepancies between selected data instances (e.g. bacterial growth or bitmap representations of handwriting).
- Information on the data set
- Select the column with image data (links).
- Select the column with image titles.
- Zoom in or out.
- Save the visualization to a file.
- Tick the box on the left to commit changes automatically. Alternatively, click Send.
Examples¶
A very simple way to use this widget is to connect the File widget with Image Viewer and see all the images that come with your data set. You can also visualize images from Import Images.
Alternatively, you can visualize only selected instances, as shown in the example below.
Image Embedding¶
Image embedding through deep neural networks.
Inputs
- Images: List of images.
Outputs
- Embeddings: Images represented with a vector of numbers.
- Skipped Images: List of images where embeddings were not calculated.
Image Embedding reads images and either uploads them to a remote server or evaluates them locally. Deep learning models are used to calculate a feature vector for each image. The widget returns an enhanced data table with additional columns (image descriptors).
Images can be imported with the Import Images widget or as paths to images in a spreadsheet. In the latter case, the column with image paths needs a three-row header with a type=image label in the third row.
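A minimal tab-separated file with such a header could look like this (the column names, paths, and class values are only illustrative; meta marks the image column as metadata):

    image               category
    string              discrete
    type=image meta     class
    Fruits/orange.jpg   fruit
    Fruits/broccoli.jpg vegetable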
Image Embedding offers several embedders, each trained for a specific task. Images are either sent to a server or evaluated locally on the user's computer, where vector representations are computed. The SqueezeNet embedder offers fast evaluation on the user's computer and does not require an internet connection. If you decide to use an embedder other than SqueezeNet, you will need an internet connection. Images sent to the server are not stored anywhere.
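Embeddings can also be computed from a script through the add-on's ImageEmbedder class. The sketch below is only an outline; the image paths are hypothetical, and the exact constructor arguments and return values may differ between add-on versions:

    from orangecontrib.imageanalytics.image_embedder import ImageEmbedder

    # Hypothetical paths; SqueezeNet runs locally, so no internet connection is needed.
    image_paths = ["Fruits/orange.jpg", "Fruits/banana.jpg", "Fruits/strawberry.jpg"]

    embedder = ImageEmbedder(model="squeezenet")
    embeddings = embedder(image_paths)  # one numeric vector per successfully embedded image

    print(len(embeddings), len(embeddings[0]))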
- Information on the number of embedded images and images skipped.
- Settings:
- Image attribute: The attribute containing images you wish to embed.
- Embedder:
- SqueezeNet: Small and fast model for image recognition trained on ImageNet.
- Inception v3: Google’s Inception v3 model trained on ImageNet.
- VGG-16: 16-layer image recognition model trained on ImageNet.
- VGG-19: 19-layer image recognition model trained on ImageNet.
- Painters: A model trained to predict painters from artwork images.
- DeepLoc: A model trained to analyze yeast cell images.
- Tick the box on the left to start the embedding automatically. Alternatively, click Apply. To cancel the embedding, click Cancel.
- Access help.
Embedders¶
InceptionV3 is Google’s deep neural network for image recognition. It is trained on the ImageNet data set. The model we are using is available here. For the embedding, we use the activations of the penultimate layer of the model, which represent images with vectors.
SqueezeNet is a deep model for image recognition that achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. The model is trained on the ImageNet dataset. We re-implemented SqueezeNet using weights from the author’s pretrained model. We use activations from the pre-softmax (flatten10) layer as the embedding.
VGG16 and VGG19 are deep neural networks for image recognition proposed by the Visual Geometry Group at the University of Oxford. They are trained on the ImageNet data set. We use a community implementation of the networks with the original weights. As an embedding, we use activations of the penultimate layer, fc7.
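To illustrate what "activations of the penultimate layer" means, the following sketch extracts roughly fc7-like features with a recent version of torchvision. This is not the widget's own implementation, the image path is hypothetical, and the preprocessing details may differ:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Load a pretrained VGG-16 and drop the final 1000-way classification layer,
    # so the network outputs the 4096-dimensional penultimate-layer activations.
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
    vgg.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("Fruits/orange.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        embedding = vgg(img).squeeze(0)  # 4096-dimensional feature vector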
Image Embedding also includes Painters, an embedder that was trained on 79,433 images of paintings by 1,584 painters and won Kaggle’s Painter by Numbers competition. Activations of the penultimate layer of the network are used as an embedding.
DeepLoc is a convolutional network trained on 21,882 images of single cells that were manually assigned to one of 15 localization compartments. We use the pre-trained network proposed by the authors. The embeddings are activations of the penultimate layer, fc_2.
An article by Godec et al. (2019) explains how the embeddings work and how to use them in Orange.
Example¶
Let us first import images from a folder with Import Images. We have three images of an orange, a banana and a strawberry in a folder called Fruits. From Import Images we will send a data table containing a column with image paths to Image Embedding.
We will use the default embedder SqueezeNet. The widget will automatically start retrieving image vectors from the server.
Once the computation is done, you can observe the enhanced data in a Data Table. With the retrieved embeddings, you can continue with any machine learning method Orange offers. Below is an example for clustering.
Image Grid¶
Displays images in a similarity grid.
Inputs
- Embeddings: Image embeddings from Image Embedding widget.
- Data Subset: A subset of embeddings or images.
Outputs
- Images: Images from the dataset with an additional column specifying whether each image was selected or, if there are several groups, which group it belongs to.
- Selected Images: Selected images with an additional column specifying the group.
The Image Grid widget can display images from a dataset in a similarity grid - images with similar content are placed closer to each other. It can be used for image comparison, while looking for similarities or discrepancies between selected data instances (e.g. bacterial growth or bitmap representations of handwriting).
- Image Filename Attribute: Attribute containing paths to images.
- Image cell fit: Resize scales the images to fit the grid cell, while Crop crops them to squares.
- Grid size: Set the size of the grid. Click Set size automatically to optimize the projection.
- Tick the box to commit the changes automatically. Alternatively, click Apply.
- Information on the input.
- Access help, save image, and report (in that order).
Example¶
Image Grid can be used to visualize similarity of images in a 2D projection. We have used 5 images of fruits and vegetables, namely orange, banana, strawberry, broccoli and cauliflower.
We loaded the images with Import Images and embedded them with the Inception v3 embedder in Image Embedding.
Finally, we visualized the images in Image Grid. It is obvious that broccoli and cauliflower are much more alike than strawberry and banana.
Save Images¶
Save images into a directory structure.
Inputs
- Data: images to save.
Save Images is a simple widget that saves images sent to its input. Images will be saved as separate files in their own directory. When a class is present in the data, images will be saved in subdirectories based on the class variable.
- Attribute containing the path to the image.
- If Scale images to is ticked, images will be resized to the size used in the selected embedder:
- Inception v3: 299x299
- SqueezeNet: 227x227
- VGG-16: 224x224
- VGG-19: 224x224
- Painters: 256x256
- DeepLoc: 64x64
- openface: 256x256
- File format to save images in. See the next section for information on supported formats.
- If Autosave when receiving new data or settings change is on, images will be saved upon every change. Save will save images, while Save as… enables the user to set the name and the folder in which to save the images.
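A rough scripted equivalent of this behaviour, using only the standard library and Pillow (the widget's own implementation may differ, and the paths below are hypothetical), could look like this:

    import os
    from PIL import Image

    def save_images(rows, out_dir, size=(227, 227), fmt="png"):
        # rows: (image_path, class_value) pairs; class_value may be None.
        for path, cls in rows:
            target_dir = os.path.join(out_dir, cls) if cls else out_dir
            os.makedirs(target_dir, exist_ok=True)
            with Image.open(path) as img:
                img = img.convert("RGB").resize(size)  # e.g. 227x227 matches SqueezeNet
                name = os.path.splitext(os.path.basename(path))[0] + "." + fmt
                img.save(os.path.join(target_dir, name))

    # One image with class "fruit" ends up in exported_images/fruit/
    save_images([("Fruits/orange.jpg", "fruit")], "exported_images")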
Supported Formats¶
Save Images can save images in the following formats:
- .png
- .jpeg
- .gif
- .tiff
- .bmp
- .eps
- .ico
Example¶
Here is a simple example of how to use Save Images. We loaded 14 paintings by Picasso, sent them to Image Embedding using the Painters embedder, then to Distances using cosine distance, and finally to Hierarchical Clustering to construct a dendrogram. Then we selected a cluster from the plot and saved the images belonging to the selected cluster with Save Images.