Neuropsychologia

Volume 46, Issue 9, July 2008, Pages 2383-2388

Predictive force programming in the grip-lift task: The role of memory links between arbitrary cues and object weight

https://doi.org/10.1016/j.neuropsychologia.2008.03.011

Abstract

We tested the ability of healthy participants to learn an association between arbitrary sensory cues and the weight of an object to be lifted using a precision grip between the index finger and thumb. Right-handed participants performed a series of grip-lift tasks with each hand. In a first experiment, participants lifted two objects of identical visual appearance which unexpectedly and randomly changed their weight. In two subsequent experiments, the change in object weight was indicated by cues, which were presented (i) visually or (ii) auditorily. When no cue about the weight of the object to be lifted was presented, participants programmed grip force according to the most recent lift, regardless of the hand used. In contrast, participants were able to rapidly establish an association between a particular sensory cue and a given weight, and thereafter scaled grip force precisely to the actual weight, regardless of the hand used or the sensory modality of the cue. We discuss our data within the theoretical concept of internal models.

Introduction

Every day we handle hundreds of objects in the environment that impose novel mechanical constraints, such as weight, shape and surface friction. When grasping and lifting an object using a precision grip between the index finger and thumb, we have to generate a grip force (exerted against the object's surfaces) sufficient to compensate for the vertical lift force (tangential to the grip surfaces) in order to prevent object slippage. On the other hand, we also have to avoid excessive grip forces, which may, for example, crush fragile objects. One crucial issue, then, is how we adjust our grip forces in an environment offering a huge diversity of objects with various mechanical characteristics. To date, the details of the underlying behavioural processes remain to be explored. Based on our life-long experience with objects of different sizes and shapes, we probably use transformational processes when lifting a novel object. After a visual analysis of the object, with emphasis on its size and shape, we use visuomotor transformations to select the necessary grip and lift forces based on a default value for object density (Gordon, Forssberg, Johansson, & Westling, 1991; Mon-Williams & Murray, 2000). Somatosensory input from the grasping fingers then allows a more accurate scaling of grip and lift forces to the mechanical object properties. When lifting a given object repetitively, accurate force adjustment is typically achieved within 2–3 lifts (Johansson & Westling, 1984). Once established, the memory link between a given object and the grip force necessary to lift it is easily transferred across hands, suggesting generalisation from one hemisphere to the other, and is also retained in memory for up to 24 h (Gordon, Forssberg, & Iwasaki, 1994).

However, in daily life we often lift objects whose visual appearance does not allow their weight to be predicted and the appropriate grip forces to be selected in advance. For example, grasping and lifting a stein of Bavarian beer does not allow for a visual judgement about its content. It appears that our motor system chooses a simple way out of this dilemma: we scale our grip force to match the mechanical properties of the object we lifted most recently (Johansson & Westling, 1984; Nowak & Hermsdörfer, 2003). Such behaviour, however, results in prediction errors, i.e., we apply forces that are too low when we lift a heavier object than expected, and forces that are too high when we lift a lighter object than expected. Such predictive scaling of grip force is initiated well before lift-off, before somatosensory information regarding the object's mechanical properties, i.e., its weight, becomes available (Flanagan & Johansson, 2002; Johansson & Westling, 1988). Once the object has been lifted successfully, we obtain the relevant somatosensory information needed to rapidly establish an internal representation of the mechanical object properties to be remembered, and adjust grip forces accordingly (Johansson & Westling, 1984; Nowak & Hermsdörfer, 2003).
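The "most recent lift" strategy described above amounts to a one-back predictor: with no cue, the planned grip force for trial t mirrors the force that was adequate on trial t−1, producing over- and undershoot errors whenever the weight changes. The following is a minimal illustrative sketch with hypothetical force values, not the authors' analysis code:

```python
# One-back sensorimotor-memory predictor (illustrative sketch).
# Force values and the default are hypothetical, chosen only for the example.

def planned_grip_force(previous_adequate_force, default_force=5.0):
    """Plan grip force from the memory of the most recent lift."""
    if previous_adequate_force is None:  # first-ever lift: fall back to a default
        return default_force
    return previous_adequate_force

# Adequate grip forces (N) for a light and a heavy object in random order.
weights = [4.0, 4.0, 6.0, 4.0, 6.0, 6.0]

errors = []
last = None
for adequate in weights:
    planned = planned_grip_force(last)
    errors.append(planned - adequate)  # >0: force too high, <0: force too low
    last = adequate                    # somatosensory feedback updates memory

# errors -> [1.0, 0.0, -2.0, 2.0, -2.0, 0.0]
```

Note how the error is zero only when the same weight repeats, exactly the pattern of prediction errors described in the text.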

The occurrence of prediction errors when lifting objects of similar visual appearance, but different mechanical properties, has been well known since the early descriptions of Johansson, Westling and co-workers (Johansson and Westling, 1984, Johansson and Westling, 1988). In contrast, the ability to use arbitrary visual cues to predict the mechanical properties of objects to be lifted has only recently been discovered (Chouinard, Leonard, & Paus, 2005; Cole & Rotella, 2002; Nowak, Koupan, & Hermsdörfer, 2007). By now, we know that healthy people can use learned associations between arbitrary colour cues and mechanical object features, such as weight and surface friction, to scale grip force in a predictive manner (Cole & Rotella, 2002; Nowak et al., 2007). It is yet unclear, however, whether people can use sensory cues of different modalities, such as visual and auditory stimuli, to learn such associations. In addition, it is unknown whether people establish a memory link between an arbitrary sensory cue and the weight of an object to be lifted regardless of the hand performing the grip-lift task.

Here we investigate the effect of employing sensory cues, such as arbitrary visual cues (different colours) or auditory cues (different tones), when grasping and lifting an object. Healthy participants were asked to use the precision grip between index finger and thumb to lift different weights in random order with either hand. In the no cue experiment, a non-informative neutral visual stimulus was presented prior to each lift, allowing no judgement about which weight was to be lifted. In the cue experiments, either an arbitrary colour cue or an arbitrary auditory cue provided advance information about which of the two weights participants would have to lift in the subsequent trial. Based on earlier data suggesting rapid learning of associations between arbitrary visual cues and mechanical object properties (Chouinard et al., 2005; Cole & Rotella, 2002; Nowak et al., 2007), we hypothesise that healthy people are able to use sensory cues of different modalities to predict the weight of objects to be lifted. From the observations that (i) healthy participants scale grip force precisely to the mechanical object properties within a few lifts, regardless of the performing hand (Johansson and Westling, 1984, Johansson and Westling, 1988), and (ii) the acquired information related to the mechanical object features is easily transferred between the hands (Gordon et al., 1994), we expect that predictive scaling of grip force based on learned associations is rapidly established, regardless of the hand performing the task.
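The hypothesised cue-based prediction can likewise be sketched as a small force memory keyed by the arbitrary cue rather than by recency: after a single informative lift per cue, the predictor scales force to the cued weight. Function names and the default force are hypothetical; this is an illustrative sketch of the idea, not a model the authors fit:

```python
# Cue-conditioned force memory (illustrative sketch, hypothetical values).

def make_cue_predictor(default_force=5.0):
    """Return (predict, update) closures sharing one cue->force memory."""
    memory = {}

    def predict(cue):
        # Before any experience with a cue, fall back to a default force.
        return memory.get(cue, default_force)

    def update(cue, adequate_force):
        # Somatosensory feedback after lift-off writes the association.
        memory[cue] = adequate_force

    return predict, update

predict, update = make_cue_predictor()
predict("red tone")          # unknown cue -> default force
update("red tone", 4.0)      # one lift suffices to form the association
update("blue tone", 6.0)
predict("red tone")          # now scaled to the cued weight: 4.0 N
```

The same memory works whichever hand performs the lift, mirroring the cross-hand transfer the study tests; the cue modality (colour vs. tone) is just a different dictionary key.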

Participants

Ten healthy participants (seven female; aged 23–53 years, mean age 29 ± 8 years), all completely naive to the specific purposes of the experiments, took part. All participants were right-handed according to a handedness questionnaire (Crovitz & Zener, 1962). Informed consent was obtained prior to testing, and all procedures had been approved by the local Ethics Committee.

Apparatus

Participants grasped a cylindrical and cordless instrumented object, mounted on top of an opaque plastic

Results

The average slip forces for the light object (0.4 kg) were 4.2 ± 0.6 N for the right hand and 4.4 ± 0.7 N for the left hand. The average slip forces for the heavy object (0.6 kg) were 6.3 ± 0.6 N (right hand) and 6.5 ± 0.6 N (left hand). There were no significant differences between slip forces for the right and left hand, regardless of object weight (P > 0.05 for each comparison). Thus, any observed differences between left and right hand performance are unlikely to result from differences in friction at the
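The reported slip forces are consistent with a single friction coefficient across hands and weights. As a rough illustrative check (not part of the authors' analysis), the standard two-digit slip relation for a precision grip, grip force × 2μ = load force at the slip point, gives μ = mg / (2 × slip force):

```python
# Back-of-the-envelope friction estimate from the reported slip forces.
# The relation assumed is the standard two-digit slip condition:
#   2 * mu * slip_force = m * g   =>   mu = m * g / (2 * slip_force)

G = 9.81  # gravitational acceleration, m/s^2

def friction_coefficient(mass_kg, slip_force_n):
    """Estimate the finger-object friction coefficient at the slip point."""
    return mass_kg * G / (2.0 * slip_force_n)

mu_light = friction_coefficient(0.4, 4.2)  # light object, right hand: ~0.47
mu_heavy = friction_coefficient(0.6, 6.3)  # heavy object, right hand: ~0.47
```

That both estimates land near 0.47 supports the text's point: the light and heavy objects offered essentially the same grip-surface friction, so performance differences cannot be attributed to friction.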

Discussion

The present set of experiments was designed to investigate whether healthy adults are able to use arbitrary sensory cues, such as different colours or different tones, to match them with objects of different weight allowing predictive force scaling during the grip-lift task. Our data enhance the current knowledge about the processes underlying predictive grip force control during object manipulation by the following important issues: (i) the formation of an association between an arbitrary

References (23)

  • Crovitz, H. F., & Zener, K. (1962). A group-test for assessing hand- and eye-dominance. The American Journal of Psychology.