A human can use many different parts of their anatomy to interact with a computer. The most common method is the tactile (touch-based) user interface, operated through clicks, taps, buttons, faders or rotary controls.
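The idea behind a tactile control can be sketched in code. Below is a minimal illustration in plain Python (no real GUI toolkit; all names are made up for this example): the program registers a callback on a button, and the "framework" runs that callback each time the user presses it.

```python
class Button:
    """A toy on-screen button that fires a callback when 'pressed'."""

    def __init__(self, label):
        self.label = label
        self._on_press = None

    def on_press(self, callback):
        # Register the function to run when the user touches/clicks the button.
        self._on_press = callback

    def press(self):
        # In a real toolkit the operating system delivers this touch event;
        # here we simulate it by calling press() directly.
        if self._on_press:
            return self._on_press()


volume = 0

def turn_volume_up():
    global volume
    volume += 1
    return volume

up = Button("Vol +")
up.on_press(turn_volume_up)
up.press()
up.press()
print(volume)  # 2
```

Real toolkits work the same way at heart: the program describes what should happen, and the touch event decides when it happens.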
Over the last decade, technological advances have enabled humans to use other methods of interaction.
One of the most prominent, which may eventually displace tactile methods, is the auditory (sound-based) user interface. Consider how smart home hubs such as Amazon’s Echo with Alexa, and assistants built into operating system software such as Siri on the iPhone, allow humans to ask computers questions. If the device is connected to the internet, most answers can be retrieved in seconds.
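The core pipeline behind such an assistant is: recognised speech arrives as text, the system matches the text to an intent, and a networked lookup produces the answer. A minimal sketch of just the matching step, with speech-to-text and the internet lookup simulated by dictionaries (all names here are illustrative, not a real assistant API):

```python
# Phrases the toy assistant understands, mapped to intent names.
INTENTS = {
    "what time is it": "time",
    "what is the weather": "weather",
}

# Stands in for an internet lookup in this sketch.
FAKE_WEB_ANSWERS = {
    "time": "It is 10:15.",
    "weather": "It is sunny and 18 degrees.",
}

def handle_utterance(text):
    """Map a transcribed spoken question to an answer, assistant-style."""
    intent = INTENTS.get(text.lower().strip("?! ."))
    if intent is None:
        return "Sorry, I don't know that one."
    return FAKE_WEB_ANSWERS[intent]

print(handle_utterance("What time is it?"))
```

Real assistants replace the first dictionary with a trained speech and language model, and the second with live web services, but the shape of the flow is the same.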
Other methods of interaction that are becoming more common include visual user interfaces such as iris recognition or face recognition. These tend to be used for security (for example, to unlock a device) or by users with physical impairments.
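The matching step behind a visual unlock can be illustrated very simply. This sketch is purely illustrative (real face and iris recognition uses trained models): the live scan is reduced to a list of numbers, compared with the template stored when the owner enrolled, and the device unlocks only if the difference is below a threshold.

```python
import math

def distance(a, b):
    # Euclidean distance between two feature templates.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def unlock(enrolled, scan, threshold=0.5):
    # Unlock only when the live scan is close enough to the stored template.
    return distance(enrolled, scan) < threshold

owner = [0.12, 0.80, 0.33]        # template stored at enrolment
same_person = [0.15, 0.78, 0.35]  # slightly different lighting/angle
stranger = [0.90, 0.10, 0.60]

print(unlock(owner, same_person))  # True
print(unlock(owner, stranger))     # False
```

The threshold is the security trade-off: too loose and strangers get in, too strict and the owner is locked out by a bad camera angle.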
Take a look at this article and read about the near future of human–computer interaction.
When designing a user interface, you must consider the differences between users and the barriers some people face in accessing technology. This gap in access is known as the digital divide.
To help narrow the digital divide, a user interface must take these differences and barriers into account.
Research a range of devices (e.g. mobile phones, tablets, robots) and investigate the user interface features they offer and how the user interacts with them.