By User not found


2011-04-17 02:13:59 8 Comments

I'm currently dealing with a multiplayer combat system where the damage dealt by the players is always multiplied by a random factor between 0.8 and 1.2.

In theory, a truly random RNG may eventually yield the same number many times (see the Tetris dilemma). This could result in a match where one player always deals very high damage while the other always deals very low damage.

What can I do to make sure this doesn't happen? Are some RNGs better than others at avoiding repetition?

5 Answers

@Beefster 2019-10-14 20:19:17

Use a shifting bias

The base generator uses a uniform distribution between \$0\$ and \$1\$ to generate \$r\$. Initially set a bias value, \$b\$, to \$0\$.

The overall distribution will be biased by the following formula:

\$ r^{\exp (-b)} \$

The effect here is that when \$b\$ is positive, the resulting number will be biased toward \$1\$. When \$b\$ is negative, the resulting number will be biased toward \$0\$.

Take this number and scale it appropriately to the desired range.

Each time a player rolls favorably, subtract from the bias. Each time the player rolls unfavorably, add to the bias. The amount changed could be scaled by how (un)favorable the roll is or could be a flat amount (or a combination). You will need to adjust specific values to fit the feel you are going for.
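For illustration, a minimal Python sketch of this idea (the 0.1 step size, and the 0.8 to 1.2 damage range taken from the question, are just values to tune):

import math
import random

class ShiftingBiasRoller:
    def __init__(self, step=0.1):
        self.bias = 0.0   # b: positive biases rolls toward 1, negative toward 0
        self.step = step  # how much each roll shifts the bias

    def roll(self, lo=0.8, hi=1.2):
        r = random.random() ** math.exp(-self.bias)  # biased uniform sample
        # Favorable rolls pull the bias down; unfavorable rolls push it up.
        if r >= 0.5:
            self.bias -= self.step
        else:
            self.bias += self.step
        return lo + r * (hi - lo)  # scale to the desired damage range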

@Kromster says support Monica 2012-07-16 13:28:52

Sid Meier gave an excellent talk at GDC 2010 on exactly this topic and the Civilization games. I will try to find and paste the link later. In essence: perceived randomness is not the same as true randomness. To make things feel fair you need to analyze previous results and pay attention to player psychology.

Avoid streaks of bad luck at all costs (if the previous two turns were unlucky, the next one should be guaranteed to be lucky). The player should always be luckier than the AI opponent.
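A toy sketch of that guarantee (the two-unlucky-turns window comes from the comment above; the 50% base chance is a placeholder):

import random

history = []  # True for a lucky turn, False for an unlucky one

def next_turn_lucky():
    # Guarantee luck after two unlucky turns in a row.
    if len(history) >= 2 and not history[-1] and not history[-2]:
        lucky = True
    else:
        lucky = random.random() < 0.5  # placeholder base chance
    history.append(lucky)
    return lucky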

@Jonathan Dickinson 2012-07-16 09:40:07

I actually wrote some code to do this. The gist of it is using statistics to correct unlucky streaks. The way you can do this is to keep track of how many times the event has occurred and use that to bias the number generated by the PRNG.

Firstly, how do we keep track of the percentage of events? The naive way would be to keep every number ever generated in memory and average them, which works but is horribly inefficient. After a little thinking I came up with the following (which is basically a cumulative moving average).

Take the following PRNG samples (where we proc if the sample is >= 0.5):

Values: 0.1, 0.5, 0.9, 0.4, 0.8
Events: 0  , 1  , 1  , 0  , 1
Percentage: 60%

Notice that each value contributes 1/5 of the final result. Let's look at it another way:

Values: 0.1, 0.5
Events: 0  , 1

Notice that the 0 contributes 50% of the value and the 1 contributes the other 50%. Taking it slightly further:

Values: [0.1, 0.5], 0.9
Events: [0  , 1  ], 1

Now the first two values contribute 66% of the value and the last one 33%. We can distil this down to the following process:

result = ... // 1 if the event that was just generated procced, 0 if it did not
new_samples = samples + 1

// Cumulative moving average: the history keeps samples/new_samples
// of the weight and the new result gets 1/new_samples of it.
average = (average * samples / new_samples) + (result / new_samples)

// You might want to limit samples to, say, 100. Leaving it to carry
// on increasing weakens the correction and can lead to unfairness
// if the game draws on forever.
samples = new_samples

Now we need to bias the value sampled from the PRNG. Because we are going for a percentage chance here, things are a lot easier (versus, say, random amounts of damage in an RTS). This is going to be hard to explain because it 'just occurred to me'. If the average is lower than desired we need to increase the chance of the event occurring, and vice versa. Some examples:

average = 0.1
desired = 0.5
corrected_chance = 83%

average = 0.2
desired = 0.5
corrected_chance = 71%

average = 0.5
desired = 0.5
corrected_chance = 50%

Now what 'occurred to me' is that in the first example 83% was just "0.5 out of 0.6" (in other words "0.5 out of 0.5 plus 0.1"). In random event terms that means either:

procced = (sample * 0.6) > 0.1
// or
procced = (sample * 0.6) <= 0.5

So in order to generate an event you would basically use the following code:

total = average + desired
sample = rng_sample() * total // where the RNG provides a value between 0 and 1
procced = sample <= desired

And that gives you the code I put in the gist. I am pretty sure this can all be adapted to the random-damage scenario, but I haven't taken the time to figure that out.
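Put together, a minimal runnable sketch of the whole scheme might look like this (Python; the names and the cap of 100 are my own choices, not the gist's):

import random

class CorrectedProc:
    def __init__(self, desired=0.5, max_samples=100):
        self.desired = desired  # target proc chance
        self.average = desired  # cumulative moving average of past events
        self.samples = 0
        self.max_samples = max_samples  # cap so old history stops dominating

    def proc(self):
        # Corrected chance is desired / (average + desired): an unlucky
        # history (low average) raises it, a lucky one lowers it.
        total = self.average + self.desired
        procced = random.random() * total <= self.desired

        # Cumulative moving average update; once samples hits the cap this
        # behaves like the 'leaky integrator' suggested in the comments.
        n = min(self.samples + 1, self.max_samples)
        self.average += ((1.0 if procced else 0.0) - self.average) / n
        self.samples = n
        return procced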

Disclaimer: This is all home-grown statistics; I have no education in the field. My unit tests do pass, though.

@Kylotan 2012-07-16 11:08:05

Looks like an error in your first example because both a 0.1 and a 0.9 value result in a 0 event. But you're basically describing keeping a cumulative moving average (en.wikipedia.org/wiki/Moving_average#Cumulative_moving_average) and correcting based on that. One risk is that each result would be significantly inversely correlated with the prior result, though this correlation would decrease over time.

@Kylotan 2012-07-16 11:11:48

I would be tempted to alter this to use a 'leaky integrator' system instead: start with the average initialised to 0.5, and instead of counting samples pick an arbitrary constant value (e.g. 10, 20, 50, or 100) that never gets incremented. Then the correlation between two subsequent values is constant throughout the use of the generator. You can also tweak the constant value: larger values mean slower correction and more apparent randomness.
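In sketch form (Python, reusing the variables from the answer above; N is the arbitrary constant):

N = 50         # arbitrary constant; larger means slower correction
average = 0.5  # start at the desired chance

def update(result):  # result is 0 or 1 for the event that just happened
    global average
    # Constant-weight update: every sample carries weight 1/N forever.
    average = (average * (N - 1) / N) + (result / N)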

@Jonathan Dickinson 2012-07-16 11:46:03

@Kylotan thanks for providing the name. I am not sure exactly what you mean with your second comment - maybe provide a new answer?

@User not found 2012-07-16 11:52:16

That's quite clever and doesn't have the limitations of arrays. I understand Kylotan's suggestion, which is to initialize samples at its maximum value (in this case, 100) from the start. That way, it doesn't take 99 iterations for the RNG to stabilize. Either way, the one disadvantage I can see with this method is that it does not guarantee fairness; it simply ensures a constant average.

@Jonathan Dickinson 2012-07-16 12:04:59

@jSepia - indeed, you would still get runs of fairness/unfairness, but they would usually be followed by a balanced run. E.g. in my unit test I 'forced' 100 non-procs and was met with ~60 procs when I then took real samples. In uninfluenced situations (if you look at the code) a 50% proc usually sees, at worst, runs of 2-3 in either direction. But one player could have a run allowing them to defeat the other player. If you want to bias it more strongly to fair: total = (average / 2) + desired.

@user744 2011-04-17 08:44:22

You can solve it the same way Tetris does, by making a preset list of damage results and shuffling.

Let's say you know the player is going to deal 0.8x to 1.2x damage, uniformly distributed. Take the list [0.8, 0.9, 1.0, 1.1, 1.2]. Shuffle it randomly, so you get e.g. [1.2, 1.0, 0.8, 0.9, 1.1].

The first time the player deals damage, they deal 1.2x, then 1.0x, and so on through 1.1x. Only when the list is empty should you generate and shuffle a new one.

In practice, you'll probably want to do this with 4+ copies of the list at once (e.g. start with [0.8, 0.8, 0.8, 0.8, 0.9, 0.9, 0.9, 0.9, ...]). Otherwise the period of the sequence is low enough that players can figure out whether their next hit is "good" or not. (Although that can also add more strategy to the combat, as with Dragon Quest IX's Hoimi table, which players figured out how to probe by watching healing numbers and then tweak until they were guaranteed a rare drop.)
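A minimal shuffle-bag sketch of this approach (Python; the four copies follow the suggestion above):

import random

class ShuffleBag:
    def __init__(self, values, copies=4):
        self.source = list(values) * copies  # more copies = longer period
        self.bag = []

    def next(self):
        if not self.bag:              # refill and reshuffle only when empty
            self.bag = self.source[:]
            random.shuffle(self.bag)
        return self.bag.pop()

multipliers = ShuffleBag([0.8, 0.9, 1.0, 1.1, 1.2])
# damage = base_damage * multipliers.next()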

@Adam 2011-04-17 10:56:51

To make it a bit more random you could always have half the list as random numbers, and calculate the other half as (2 - x) to keep the average correct.

@user744 2011-04-17 11:21:54

@Adam: That method really only works for this particular example; if you are dealing out Tetris pieces rather than damage multipliers, what's 2 - S block?

@Kylotan 2011-04-17 16:15:59

The usual term for this sort of system is "random without replacement". It's really just analogous to drawing from a deck of cards instead of rolling dice.

@o0'. 2011-04-17 17:21:44

Even better, you could make half the numbers truly random, and only the other half subject to this rule.

@user744 2011-04-17 18:59:39

@Lo'oris: I can't see how that's in any way better.

@o0'. 2011-04-17 19:04:51

@Joe: because it is more "really random", but still avoids too long streaks (which was what the OP wanted).

@user744 2011-04-17 19:07:40

It can still result in the local distribution not resembling the global distribution, which is exactly what the question doesn't want. Terms like "really random" are vague pseudomathematics; the more precisely you define the statistical properties you want, the clearer your intent and game design will be.

@John McDonald 2012-07-16 15:38:26

There's a modification to the popular board game The Settlers of Catan that uses 36 shuffled cards (one for each possible two-dice roll), but instead of playing through all 36 cards each cycle, you discard 4 off the top to provide a little uncertainty.

@Runonthespot 2012-08-01 13:34:58

Also take a look at ShuffleBags, which implement this sort of behaviour quite neatly (kaioa.com/node/53).

@coderanger 2011-04-17 02:19:58

What you are asking for is actually the opposite of most PRNGs: a non-linear distribution. Just put some kind of diminishing-returns logic in your rules. Assuming that everything over 1.0x is a "critical hit" of some kind, say that each round your chance of getting a crit goes up by X until you get one, at which point it resets to Y. You then do two rolls each round: one to determine crit or not, and another for the actual magnitude.
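For instance, a sketch of that two-roll scheme (X, Y, and the magnitude ranges below are placeholders):

import random

CRIT_STEP = 0.05   # X: chance added each round without a crit
CRIT_RESET = 0.05  # Y: chance right after a crit

crit_chance = CRIT_RESET

def roll_damage(base):
    global crit_chance
    if random.random() < crit_chance:            # roll 1: crit or not
        crit_chance = CRIT_RESET                 # reset on a crit
        return base * random.uniform(1.0, 1.2)   # roll 2: crit magnitude
    crit_chance += CRIT_STEP                     # chance grows until a crit lands
    return base * random.uniform(0.8, 1.0)       # roll 2: normal magnitude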

@Michael 2011-04-17 02:42:22

This is the general approach I'd take: use the uniform distribution of the RNG but transform it. You could also use the output of the RNG as input to your own custom distribution that re-adjusts itself based on recent history, i.e. force variance in the outputs so that it looks "more random" in human-perception terms.

@coderanger 2011-04-17 02:46:00

I actually know of an MMO that does something like this, but the chance of a crit actually increases each time you get one until you don't get one, then it resets to a very low value. This leads to rare streaks of crits which are very satisfying to the player.

@Michael 2011-04-17 02:53:16

Sounds like a good algorithm: long dry spells have always been frustrating, but it doesn't lead to crazy streaks of crits either.

@user744 2011-04-17 08:53:23

Fixing this doesn't require a nonlinear distribution, it just requires that short time-sequential subsets of the distribution have the same properties as the distribution itself.

@dreta 2012-07-16 13:45:29

This is how Blizzard games do it, at least since Warcraft 3.
