Interface agents and human control
We hear a lot about Radical Trust, with the emphasis being on trusting users (of systems, websites, etc.) to guide organizations. I have tried to sound a skeptical note at times, pointing out that something called “groupthink” is the danger when you decide to trust the wisdom of crowds. I’ve always most admired people whose ideas were extremely unpopular, and who may even have been cast out of their own communities, but whose ideas proved to be true or whose work turned out to be of great value later on. Only because they worked independently and recorded their ideas or art for posterity did we benefit from their thinking. Radical Trust is primarily about trusting the average (or so it seems to me).
Right now, however, I want to focus on a different but related contemporary problem of trust: the trust that we put in increasingly intelligent machines to help us do what we want to do. I’m limiting this discussion mainly to the way search engines differ from the earlier generation of database interfaces that information professionals used, which were operated using pure Boolean logic. There is a whole host of interface agents, however, that are designed to do some of our thinking for us. “Smart” is the signal, in marketing campaigns, that a new level of AI is being applied in a service, for better or worse. (I remember when I first saw “smart cards” advertised, I thought, “Just what I need – an ATM card that is smarter than me.”)
I received my education in information retrieval at a time when simple Boolean searching was still the norm. Boolean searching meant that the searcher could construct effective search expressions based on clear knowledge of what the machine was doing. Knowledge of how to use Boolean logic, combined with knowledge of what is in the database (size of the database relative to the desired results set, likely frequency of search terms, etc.), was what a professional needed to do skillful searches with good results.
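To make that transparency concrete: a pure Boolean search is nothing more than set logic over the terms each record contains. Here is a minimal sketch (the documents and terms are invented for illustration, not from any real database):

```python
# Minimal sketch of pure Boolean retrieval: a record is in the results
# set if and only if the search expression is satisfied by the terms
# it contains. Nothing is ranked, weighted, or guessed.
docs = {
    1: {"digital", "libraries", "preservation"},
    2: {"digital", "archives"},
    3: {"libraries", "cataloging"},
}

def boolean_and(term_a, term_b):
    """(term_a AND term_b): exact set intersection -- no ranking."""
    return {doc_id for doc_id, terms in docs.items()
            if term_a in terms and term_b in terms}

def boolean_and_not(term_a, term_b):
    """(term_a AND NOT term_b): exact set difference."""
    return {doc_id for doc_id, terms in docs.items()
            if term_a in terms and term_b not in terms}

print(boolean_and("digital", "libraries"))      # {1}
print(boolean_and_not("digital", "libraries"))  # {2}
```

The point of the sketch is that the searcher can predict exactly which records will come back, because the membership rule is the search expression itself.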
A search engine algorithm, on the other hand, works fundamentally differently, and is designed to do some of the user’s thinking for him. Not only does it rank results by relevance in its display; it also determines what will be in the results set according to its relevance formula. Rather than simply including or excluding items according to the presence or absence of search terms, it includes an item only when its calculated relevance crosses a numerical threshold. Not all items in the results set will necessarily contain all the search terms, and not all items containing a given search term will appear in the results set.
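The difference fits in a few lines of code. In this sketch, a toy relevance score (the term-frequency weighting and the threshold value are invented for illustration; real engines use far more complex, proprietary formulas) decides membership in the results set, so a document can appear without containing every query term, and a document containing a query term can still be cut:

```python
# Toy sketch of threshold-based retrieval: each document gets a
# relevance score against the query, and the results set is whatever
# clears the threshold -- not whatever matches all the terms exactly.
# Documents, weights, and threshold are all invented for illustration.
docs = {
    "A": {"digital": 3, "libraries": 2},   # term frequencies
    "B": {"digital": 5},                   # lacks "libraries" entirely
    "C": {"libraries": 1, "cataloging": 4},
}

THRESHOLD = 3.0

def score(doc_terms, query):
    """Sum of term frequencies for the query terms present in the doc."""
    return sum(doc_terms.get(term, 0) for term in query)

def search(query):
    """Return the docs scoring at or above the threshold, best first."""
    scored = [(score(terms, query), doc_id) for doc_id, terms in docs.items()]
    return [doc_id for s, doc_id in sorted(scored, reverse=True)
            if s >= THRESHOLD]

results = search(["digital", "libraries"])
# "B" makes the results set despite missing a query term, while
# "C" (score 1) is cut even though it does contain "libraries".
```

Notice that the searcher’s control now depends on knowing the scoring formula and the threshold, which is exactly the knowledge a proprietary engine withholds.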
In practical terms, a Boolean interface provides exact control but requires a higher degree of skill, while a search engine offers weaker control but requires less skill. It is true that there is skill involved in effective use of a search engine, and it is true that this skill involves the same kind of knowledge of what is in the database (i.e. being able to roughly predict what kind of a search will work based on a sense of what is out there). But because search engine algorithms are proprietary, complex, and change frequently, it is not possible to have the kind of knowledge of the system’s workings that one would need to control one’s search results nearly as tightly.
This means that we have to trust the interface (just as library patrons who wanted a database search formerly had to trust us as search intermediaries), and give up a degree of control.
The results are sometimes frustrating, and as interfaces become smarter, the frustration can increase rather than decrease. For example, I have noticed recently that Google has started to include similarly spelled words as hits in its results, beyond merely suggesting alternate spellings. This can make it more difficult to search for a person who has a name that is an alternate spelling of a common name (where in the past their odd spelling made it an easier search). Just a small example of the way that a smarter interface can make it harder to do what you want.
Part of the problem is that interface agents are programmed according to the patterns of a common denominator of users, while we as information professionals tend to search differently from the average user. We expect more precision from systems, and as systems do more of our thinking for us, we are losing our ability to get that precision. Most people aren’t interested in the degree of precision that we are, or at least don’t have a clear concept of how to control an interface in order to get it, or the time or inclination to learn the necessary skills.
This problem interests me as a librarian, because I’m concerned about deprofessionalization and disintermediation in our field, but the broader issue also interests me as an observer of society. Not only are interfaces becoming smarter, but the databases with which they interface us (government, commercial, medical) are becoming integrated. This means that we are being encouraged to trust what is gradually becoming a unified interface to a decision-making network of software that makes assumptions based on averages and data of unknown quality. Interfaces that were initially transparent tools have become opaque agents in their own right, with consequences for our ability, ultimately, to have control over our own lives. It seems like sci fi, but to an extent it is already here…