It all depends on what country you're in and what grading system is used. For example, my M.A. program through an English university uses the divisions Acceptable, Merit, and Distinction, with the originating assumption that a student can master the material at a medium level. A further assumption is that each point of knowledge accumulates. In American schools, the assumption often runs the other direction: it is assumed students can master all the material, so marks are taken off for each point of knowledge missed, rather than accumulated for each demonstration of more than the expected level of mastery. (Someone once explained this difference by saying that under the British assumption, a 100% on an essay, even a student's essay, would mean the scorers/instructors viewed it as the best essay ever written on the subject, making 100% a virtually impossible goal.)
Along with having a deep psychological effect, for either good or ill, the difference in assumptions means that students in schools using the British method may be considered to have done better in their studies than those in America when comparable marks are given. So the answer to your question really depends on (1) where you are, (2) what assumptions are in place, (3) the method of evaluation those assumptions structure, and (4) your goals for higher education. In the US, a "C" at 70+% correct is the bottom of an acceptable final grade: consistent Cs will limit which colleges you can get into and which courses of study you can pursue. In the English system, 60+% is the minimum expectation, 70+% is Merit, and the higher percentages are Distinction.
I suppose, then, the definition of a "good" final mark in the US system would be 80+%, while in the British system "good" would be 70+%, since "acceptable" in the US is 70+% (a C) and in the British system is 60+%: "good" is then one step up, with "excellent/Distinction" above that.
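To make the two scales concrete, here is a minimal sketch in Python of the thresholds described above. The band labels are my own shorthand, and the 80% cutoff for Distinction is an assumption, since the text only says "the higher percentages":

```python
def us_band(pct: float) -> str:
    """US reading per the thresholds above: 70+% is a C (the bottom of
    acceptable); 80+% is treated as 'good', per the last paragraph."""
    if pct >= 80:
        return "good (B or better)"
    if pct >= 70:
        return "acceptable (C)"
    return "below acceptable"


def uk_band(pct: float) -> str:
    """English-system reading: 60+% acceptable, 70+% Merit.
    The 80% Distinction cutoff is an assumption, not stated above."""
    if pct >= 80:  # assumed cutoff
        return "Distinction"
    if pct >= 70:
        return "Merit (good)"
    if pct >= 60:
        return "acceptable"
    return "below acceptable"


# Compare how the same percentage reads under each system.
for score in (62, 71, 83):
    print(f"{score}% -> US: {us_band(score)} | UK: {uk_band(score)}")
```

Running it shows the crux of the answer: the same 71% reads as a bare C in the US but as Merit, a "good" mark, in the English system.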