Seeing the potential applications of CLIP as a result of using it today has been eye-opening. The first place my mind went was to its potential as an incredibly useful tool for GLAM. If images of every piece in the Canadian War Museum or the Canadian National Gallery were put into a zip file and fed to a model like CLIP, it would make the job of specific search and categorization much simpler.

The test I did was entering images of three people into the model and asking it which was most related to 'football'; in this case it said football was most closely associated with an image of Tom Brady. When the input was "happy person", the model identified Pierre Trudeau as the happiest of the bunch, likely because he was smiling while the others were not. What this shows is that CLIP is capable of finding associations not only with specific ideas like 'football' but also with more abstract concepts like happiness.

Pulling this back around to how it could apply to GLAM, a couple of important uses emerge from the fact that both specific ideas and more abstract themes can be entered into the model. It could be used to categorize images in a gallery's collection by theme, making the job of a curator who might otherwise have to skim through every piece to match the theme of a given exhibition significantly easier. If all of the uncategorized and neglected artifacts in the war museum had pictures taken of them, those pictures could then be fed to a model like CLIP, which would in turn classify them to fit whatever organizational system the museum has in place. Of course, these are only a few examples of the ways technology like this could be applied to GLAM, and many more applications of models like CLIP certainly exist.

For future reference, I will avoid file names with spaces when interacting with models, to avoid having to put quotation marks around every path where they occur.
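
As a concrete illustration of the kind of test described above, here is a minimal sketch of ranking a handful of images against a text prompt with CLIP. It assumes the Hugging Face transformers implementation and the openai/clip-vit-base-patch32 checkpoint, and the file names are hypothetical stand-ins for the three portraits; the notebook I actually used may have wrapped this differently.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP checkpoint (assumption: the widely used ViT-B/32 weights).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical file names standing in for the three portraits from the test.
image_paths = ["tom_brady.jpg", "pierre_trudeau.jpg", "third_person.jpg"]
images = [Image.open(path) for path in image_paths]

prompt = "football"  # swap in an abstract theme like "a happy person" to test concepts

# The processor tokenizes the text and preprocesses the images in one call.
inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_texts): one similarity score
# per image for the prompt; the highest-scoring image is the best match.
scores = outputs.logits_per_image.squeeze(1)
best = scores.argmax().item()
print(f"Most related to '{prompt}': {image_paths[best]}")
```

The same scores can be flipped around for the categorization use case: pass a list of theme labels as the text input instead of a single prompt, and the highest-scoring label for each image becomes its suggested category.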