Note: I haven't watched the video, because it's 5 hours long and I hold a very low view of Peterson, so I find him hard to listen to at the best of times. I'm just going off the questions here>>Maybe you get a higher good when there is an opposition between good and evil.>>Maybe the good you get when good and evil are both possibilities is a higher good than the good you get with just good.
I have caveated answers to these that can be summed up with a *shrug*, but ultimately I don't see why I (or anyone, really) should inherently even value this "top reached height of good" metric. Here are some thought experiments:
1. Consider the immune system. It's a good thing to have, as it fights disease, and a stronger and more robust (asterisks, caveats, etc. apply) immune system is better. What drove it to get stronger over evolutionary time was the pathogens it had to deal with. Now, imagine you meet somebody who has a button. This button, upon being pressed, would cure all disease, forever, from now into all the future. This person is considering pressing it, but hesitates. After all, doing so would deprive this good thing, the immune system, of its value. It would probably even make it atrophy over time, and it certainly won't get stronger, and as mentioned before: stronger is better. Yet after the button is pressed, there is no such "better" anymore. What do you say to this person? Do you congratulate them on their wisdom, or do you try to convince them (how?) to press the button anyway?
2. Consider a machine making chocolate bars. It just plods along, consistently churning them out. Now, imagine the company running the factory gives it a sentient AI, which might decide to do all sorts of things to the bars, from inducing off flavours all the way to making them straight-up poisonous. This AI probably won't do any of that. As far as we can tell it's excellently socialized, it has a good moral backbone, and it is overall a swell sentient being. So, has this change improved the resultant chocolate bars? Are they now better because the sentient entity making them MIGHT have made them bad (poisonous, even!) but didn't? What if a bunch of other factories follow suit, and some of those AIs aren't quite as well "educated", raising the risk of a bad bar? Does that make the good bars better still, further increasing the metric of "max goodness of the best chocolate bar"? And is any such arrangement actually preferable to you as a consumer, compared to the initial sentience-free machine that just plods along, consistently churning out good ones?
3. Consider a fire in an apartment building that traps a child. A random passerby runs in and saves the child, at great risk to themselves. The heroism is a great thing, and it could never have happened without the fire. "Therefore, we should not take any measures to prevent fires in the future, as no fires means no such heroes." Does this track? Would this attitude really make for a more preferable world to live in? Or, on the other hand, would it be better for there to be fewer fires, even if it meant fewer such acts of heroism?