Fix Live Link facial animations when it isn't recognizing the emotion

Having some trouble with Live Link motion capture. It interprets an open smile more as an angry grin, adding too many unnecessary blendshapes. Basically, it almost cannot distinguish the smile from the grin, using the same mix of six mouth blendshapes in slightly different proportions.
For the smile it uses:
- 0.8-0.9 MouthSmile
- 0.5-0.6 MouthLowerDown (which fits anger, but not a smile)

While for the frown it uses:
- 0.8-0.9 MouthLowerDown
- 0.4-0.5 MouthSmile

So I was wondering if there's any way to fix certain key expressions by lowering specific parameters when others predominate.
Since smile and frown are opposites, I'd like to avoid mixing them together when one of them is high enough to be considered the dominant one.
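In case it helps to show what I mean, here's a rough Python sketch of the kind of post-processing I'm imagining. The function name, the dominance threshold, and the falloff strength are all made up by me for illustration, not part of any actual Live Link API; the blendshape names match the ones above:

```python
def suppress_opposites(curves, pair=("MouthSmile", "MouthLowerDown"),
                       dominance=0.7, strength=0.8):
    """If one curve of an opposing pair exceeds `dominance` (and is the
    larger of the two), attenuate the other curve proportionally to how
    far past the threshold the dominant one is."""
    a, b = pair
    out = dict(curves)
    if out.get(a, 0.0) >= dominance and out.get(a, 0.0) > out.get(b, 0.0):
        # How far past the threshold the dominant curve is, scaled to 0..1
        excess = (out[a] - dominance) / (1.0 - dominance)
        out[b] = out.get(b, 0.0) * (1.0 - strength * excess)
    elif out.get(b, 0.0) >= dominance:
        excess = (out[b] - dominance) / (1.0 - dominance)
        out[a] = out.get(a, 0.0) * (1.0 - strength * excess)
    return out

# Open-smile frame like the one described above: MouthSmile dominates,
# so the conflicting MouthLowerDown value gets pulled down.
frame = {"MouthSmile": 0.9, "MouthLowerDown": 0.55}
print(suppress_opposites(frame))
```

Something like this running on the incoming curve values each frame (before they hit the character) is basically what I'm after, if Live Link exposes a place to hook it in.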