Human Machine Guitar Hero

Adaptive gaming companion that knows your play style

🎉 Proceedings of 2024 Meaningful Play 🎉

Type: Independent Research

Date: Jan 2024 - Oct 2024

Role: Researcher, Systems Designer, Developer

Technology: Processing, p5.js, librosa (Python audio-processing library)


Note 🎉 This project was featured at the 2024 Meaningful Play Conference! 🎉

Nik Kim, "Human-Machine Guitar Hero: Developing a cooperative AI agent a human player can play with", Meaningful Play 2024 Proceedings, Pittsburgh, Apr. 2025. pp. 79-100, doi: 10.17613/z2x3h-79d21.

Play Game 🎮

HMGH is hosted on the web and playable in the browser. Experience cooperative play with a machine player. If you're curious about the underlying mechanism of this machine player, check the project repository here.

Anatomy of Human-Machine Guitar Hero

HMGH reimagines Guitar Hero as a two-player cooperative game. The human and machine players both strive to earn the best score they can. The clever twist HMGH uses is that players earn big bonus scores when their contributions to the total score remain roughly equal. So each player must not only play well individually but also strive to keep the contribution levels balanced throughout the game. The diagram below illustrates the game mechanism.

A diagram that shows cybernetic game mechanism
Fig 1. Cybernetic game mechanism
A diagram that shows anatomy of Human Machine Guitar Hero
Fig 2. Anatomy of Human Machine Guitar Hero

Both the human and machine players interact with the game environment. They take actions toward the environment and receive feedback (measurements) about their play. Since both players' objective is to achieve a higher score, each comes up with a strategy. Their strategy shapes their future actions in the form of an action plan. Most importantly, each player's action plan is broadcast to their partner through the game system.

Engineering Details

Computing Human Player's Technical Skill Level

We need two quantities to plot the decision quadrant: the player's skill level and management level. Let's first look at the mathematics behind computing the human player's skill level. The raw skill level is determined from three sources: the total score the human player earned, the number of faulty plays, and the number of missed shots. Put these into a basic formula:

// Penalize faults, then normalize by total attempts (score plus misses)
let raw_skill = (this.phumanScore - fault_weight * this.humanFault) / (this.phumanScore + this.humanMiss);

However, I processed this raw_skill value once more, weighting the result by the difficulty of the pattern the player played. And there is an interesting idea behind computing difficulty: the sparseness and Moran's I of the given binary note matrix.
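To make the difficulty idea concrete, here is a minimal pure-Python sketch of the two signals: density (how full the binary note matrix is) and Moran's I with rook adjacency (how spatially clustered the notes are). The function names and the rook-adjacency choice are my own assumptions; the weighting the game actually applies may differ.

```python
def density(m):
    """Fraction of cells in the binary matrix that contain a note."""
    cells = [v for row in m for v in row]
    return sum(cells) / len(cells)

def morans_i(m):
    """Moran's I spatial autocorrelation with rook (4-neighbor) adjacency."""
    rows, cols = len(m), len(m[0])
    n = rows * cols
    mean = density(m)
    num, w = 0.0, 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    num += (m[r][c] - mean) * (m[rr][cc] - mean)
                    w += 1
    den = sum((m[r][c] - mean) ** 2 for r in range(rows) for c in range(cols))
    return (n / w) * (num / den)
```

A checkerboard pattern yields I near -1 (notes perfectly dispersed), while clustered notes push I toward +1; combined with density, this gives a rough handle on how demanding a pattern is to play.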

Computing Human Player's Management Ability

To compute the player’s management level, which measures how well they maintain equal contribution during gameplay, I tracked the total time that contribution equality was maintained. I then divided this value by the total elapsed play time. This calculation produces a score between 0 and 1.

However, the more interesting aspect lies in how I increased the difficulty of maintaining equality as the game progressed. Simply checking a fixed ratio of human and machine contributions is not sufficient. As the game continues and the accumulated score increases, it becomes easier to maintain equality within a fixed margin. For example, if 100 points have been collected and the equality window is ±5 percentage points, then a difference of just 10 points will break the equality condition. But when 1000 points are collected, the same window allows a difference of 100 points—making it less strict.

To address this, I gradually decreased the equality window as players maintained equality over time, effectively increasing the difficulty and encouraging finer control in contribution balancing.
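The shrinking window can be sketched as a function of the boost level, using the constants from the game code (a starting window of 10 contribution points, a 0.8 decay factor per 10-second interval of maintained equality, and a floor of 0.5); the function name here is illustrative:

```python
def equality_threshold(boost, start=10.0, decay=0.8, floor=0.5):
    """Allowed contribution gap after `boost` consecutive 10s intervals
    of maintained equality: the window shrinks geometrically to a floor."""
    return max(floor, start * decay ** boost)
```

So the longer a pair keeps their contributions balanced, the tighter the tolerance becomes, until it bottoms out at the floor value.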

// 1. Computing player's management level
computeHumanPlayerManagement(score) {
    let now = millis();
    let gameDuration = now - score.startTime;
    let managingAbility = score.eqDuration / gameDuration; // 0..1
    let mappedManagingAbility = map(managingAbility, 0, 1, 440, 160); // map to quadrant y-coordinate
    this.humanManagement.push(new Tuple(managingAbility, mappedManagingAbility));
    return mappedManagingAbility;
}

// 2. Computing equality-maintained time
computeContribution() {
    this.dContribution = abs(this.hContribution - this.mContribution);
    let now = millis();
    if (this.lastCheckedTime === 0) this.lastCheckedTime = now;
    let delta = now - this.lastCheckedTime;
    this.lastCheckedTime = now;

    // This threshold tightens the equality condition as the game proceeds
    let threshold = max(0.5, 10 * pow(0.8, this.boost));

    // Maintaining equality is extremely favorable for accumulating bigger bonus points,
    // because boost is incremented for every 10s of elapsed equality maintenance
    if (this.dContribution <= threshold) {
        if (this.savedTime === 0) {
            this.savedTime = millis();
            this.boost = 0;
        }
        this.eqDuration += delta;
    } else {
        // Equality broken: reset the streak
        this.savedTime = 0;
        this.boost = 0;
    }

    let elapsed = now - this.savedTime;
    let nextBoostLevel = int(elapsed / 10000); // every 10s

    if (this.savedTime !== 0 && nextBoostLevel > this.boost) {
        this.boost = nextBoostLevel;
        this.bonusTime = millis();
        this.deltaBonusPoint = (this.score * this.boost / 10) * 0.5;
        this.bonusPoint += this.deltaBonusPoint;
    }
}

This code keeps boost tracking and equality-duration accounting separate. Bonus points are triggered for every 10 seconds of continuously maintained equality, while the total duration of equality is accumulated in the variable eqDuration on every frame, as long as the equality condition holds.

Audio Processing

HMGH is fundamentally a rhythm game, so generating beats that correspond to the music was important. To automatically generate beat patterns that fit the music, I used the Python library librosa, a well-known package for music and audio analysis. Specifically, I used its onset-detection methods to capture new sound events in the music.
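As a hypothetical sketch of that pipeline (the function and parameter names below are illustrative, not taken from the project repository): librosa detects onset times offline, and a small helper then quantizes them onto a fixed time grid and assigns each onset to one of the game's strings, here in simple round-robin order.

```python
def onsets_to_pattern(onset_times, n_strings, duration, step=0.25):
    """Quantize onset times (seconds) into a binary strings-by-steps matrix."""
    n_steps = int(duration / step)
    pattern = [[0] * n_steps for _ in range(n_strings)]
    for i, t in enumerate(onset_times):
        col = min(int(t / step), n_steps - 1)  # snap onset onto the grid
        row = i % n_strings                    # round-robin string assignment
        pattern[row][col] = 1
    return pattern

# With librosa (run offline, before the game is served):
#   import librosa
#   y, sr = librosa.load("song.wav")
#   onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
#   pattern = onsets_to_pattern(onsets, n_strings=4, duration=len(y) / sr)
```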

Modeling Cooperative Machines: Decision Quadrant and Action Table

Play style decision quadrant
Fig 4. Play style decision quadrant
Machine’s action table
Table 1. Machine’s action table

The most interesting part is: "How should the gameplay be designed from the machine player's perspective?" To model a machine player's play, we should answer the following three questions. First, what to learn? Second, how to model decisions? Third, how to model actions?

What to learn?

In an ideal scenario, the machine agent would also learn and adapt to the game environment over time. However, in this prototype, the machine’s knowledge of the environment is pre-programmed—in other words, it functions as a perfect player from the outset. Instead, the focus of this project is on the second aspect: learning about its human counterpart. The machine player closely monitors the human player’s every in-game action, using this information to determine the human’s play style.

How to model decisions?

So, how can we model such decisions? Refer to Figure 4. The agent uses two numerical values to assess the human player. The skill level reflects the player’s accuracy in hitting notes at the correct timing, while the managing level represents the player’s ability to maintain a balanced contribution between the human and machine players.

Using these two dimensions, the machine maps the player’s behavior onto a playstyle quadrant. The x-axis corresponds to the skill level, and the y-axis represents the managing level. Based on the human player’s gameplay data, the machine plots the data on the quadrant and classifies human players into one of four play styles: Master, Strategic, Novice, or Solo.
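Under my reading of Figure 4 (the quadrant labels in the figure itself are authoritative; the assignments and midpoint thresholds below are assumptions), the classification reduces to a simple threshold rule with skill on the x-axis and management on the y-axis:

```python
def classify_play_style(skill, management, skill_mid=0.5, mgmt_mid=0.5):
    """Map (skill, management) onto one of the four play-style quadrants.
    Quadrant assignment is inferred from the action descriptions:
    e.g. Novice = low on both axes, Solo = skilled but not balancing."""
    if skill >= skill_mid:
        return "Master" if management >= mgmt_mid else "Solo"
    return "Strategic" if management >= mgmt_mid else "Novice"
```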

How to model actions?

One last piece remains: how to model actions? Based on the previous decision, the machine player acts according to its action table, as shown in Table 1. For example, if the machine player decides the human player is a Novice, it will change the string configuration in a way that minimizes the human's play. On the other hand, when the player is a Solo, the machine will keep the human player actively engaged while ensuring the contribution level is maintained. When the player is a Master, the machine will follow its counterpart’s action plan.
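The decision-to-action step can be sketched as a lookup (Table 1 lists the full set; the entries below cover only the behaviors described in the text, so the Strategic row is deliberately represented by a neutral fallback):

```python
# Hypothetical action dispatch keyed on the classified play style.
MACHINE_ACTIONS = {
    "Novice": "reconfigure strings to minimize the human's required play",
    "Solo": "keep the human engaged while enforcing contribution balance",
    "Master": "follow the human player's broadcast action plan",
}

def machine_action(play_style):
    # Styles not listed here fall back to a default cooperative behavior
    return MACHINE_ACTIONS.get(play_style, "maintain default cooperative play")
```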

Supporting Conversing Systems

An image that shows the state of the human signaling to the machine partner. The corresponding string glows yellow.
Fig 5(a). Human signaling
An image that shows the state of the machine signaling to the human partner. The corresponding string glows purple.
Fig 5(b). Machine signaling

As the final component, the game must provide a means of communication between the human and machine players. This prototype supports an intuitive conversing system. In Figure 5(a), the human player presses key 1 to indicate that they will take responsibility for string number 1. Figure 5(b) illustrates a scenario where the machine player signals that it will take over string number 2 from the human player. In both cases, the selected string lights up with a color corresponding to the player, visually reinforcing the communication.