
Need some help from the forums for an experiment in Machine Learning for grading comics.


1 minute ago, RhialtoTheMarvellous said:

Are you confused about this because there are multiple categories in the comic book example (10.0,9.9,9.8,9.6... etc) versus say the dog/cat example?

That isn't really an issue. You can establish any number of different categories for the computer to evaluate as long as you have the sample data for those categories. You could have the computer evaluate dog/cat/fox/beaver if you wanted to. This is in fact what Google and other big data companies do for their image search.

I'm not confused at all. It's a prime example of a false equivalence. It ain't close to that easy. (shrug)


I've thought about doing this as well. The problem is having the raw data to train on. I'm sure CGC has a tremendous amount of data available -- whether or not they would share it is doubtful. If it works well, they would be less dependent on error-prone human grading and could cut down on pre-screening at the least. I believe it would add value. Maybe even better than humans for volume.

What type of learner are you using? What language/framework?

I see some posters arguing about the interior, but the reality is that the outer appearance carries much greater weight toward the grade, in general.


8 minutes ago, RhialtoTheMarvellous said:

Are you confused about this because there are multiple categories in the comic book example (10.0,9.9,9.8,9.6... etc) versus say the dog/cat example?

That isn't really an issue. You can establish any number of different categories for the computer to evaluate as long as you have the sample data for those categories. You could have the computer evaluate dog/cat/fox/beaver if you wanted to. This is in fact what Google and other big data companies do for their image search.

 

In fact, you could get better granularity (all the way to continuous) without any problems. There's no limit in ML data input or output type.
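As a rough illustration in plain TensorFlow/Keras (a sketch only -- layer sizes and input shape are placeholders, and the actual experiment in this thread uses ML.NET): the number of grade buckets is just the size of the output layer, and swapping that layer for a single linear unit turns it into a continuous-grade regressor.

```python
import tensorflow as tf

GRADE_BUCKETS = ["10.0", "9.9", "9.8", "9.6", "9.4", "9.2", "9.0"]  # extend as needed

def cover_model(continuous=False):
    """Tiny CNN over cover scans; categorical grade buckets or one continuous grade."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        # The output layer is the only real difference between "buckets" and "continuous":
        tf.keras.layers.Dense(1) if continuous
        else tf.keras.layers.Dense(len(GRADE_BUCKETS), activation="softmax"),
    ])
    if continuous:
        model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    else:
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
    return model
```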


50 minutes ago, RhialtoTheMarvellous said:

In what way do you find this comparison flawed?

What attributes does the machine build up in the model from a few thousand images of dogs and cats which then allows it to take a completely new image and assign a value of dog or cat to it?

Yes, but can the machine GRADE the dogs and the cats? I think this is the real question here.


1 minute ago, bronze_rules said:

I've thought about doing this as well. The problem is having the raw data to train on.

I'm sure CGC has a tremendous amount of data available -- whether or not they would share it is doubtful.

If it works well, they would be less dependent on error-prone human grading and could cut down on pre-screening at the least.

I believe it would add value. Maybe even better than humans for volume.

What type of learner are you using?

I'm using ML.NET with TensorFlow, as it's kind of the easy button for pulling in and training a model and I'm familiar with the .NET environment. I could go look up the algorithm it defaults to if you are curious.

I pulled all of the images off of the CGC registry to make an initial attempt at this, because all of them have metadata showing the score. I made a scraper that went through each page and downloaded the front/back images. I only downloaded images where I found both a front and a back, but that still ended up giving me a lot of random placeholders, and I cleaned most of that out. One problem is that the distribution of data is highly skewed, as you can see from this screenshot. There are a ton of 9.8 items -- like 5x as many as the next nearest category, and then 7,000 times as many as the smallest category. The other problem is that the images are overall pretty low quality, don't really show defects, and they are all in holders.
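One common way to soften that kind of skew during training is to weight the rarer grades more heavily. A rough sketch in plain Python/Keras terms rather than ML.NET, with made-up counts standing in for the real registry numbers:

```python
# Hypothetical per-grade counts (placeholders, not the real registry numbers).
counts = {"9.8": 50000, "9.6": 11000, "9.4": 4000, "9.0": 1200, "8.0": 300, "6.0": 7}
labels = sorted(counts)              # class index = position in this sorted list
total = sum(counts.values())

# Inverse-frequency weights: a common 9.8 contributes little, a rare 6.0 contributes a lot.
class_weight = {i: total / (len(counts) * counts[label])
                for i, label in enumerate(labels)}

# With a tf.data dataset of (image, class_index) pairs, the weights are simply
# handed to training:
# model.fit(train_ds, epochs=10, class_weight=class_weight)
```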

This is interesting though because it makes me think that there are a lot of folks throwing moderns at CGC to get a 9.8 rating on them to resell for more money.


[Screenshot: distribution of grades in the scraped registry data]
 


3 minutes ago, Dyeon Xmas Roy said:

Not if the graders are professional and objective. The back cover should carry as much weight as the front, for example. The "cover only" grading is for fanboys. 

Disagree. Obviously back scans would add data, but again, they're hard to get -- UNLESS -- CGC were willing to share, which I doubt. You need a lot of data to train reasonably well. I think the front information alone would bring us close. The idea is to get a general grade close to what it would be. Obviously there are going to be outliers, and humans can help here. But it's not much different from how humans approach it. The more general jobs go quicker.


13 minutes ago, RhialtoTheMarvellous said:

I'm using ML.NET with TensorFlow, as it's kind of the easy button for pulling in and training a model and I'm familiar with the .NET environment. I could go look up the algorithm it defaults to if you are curious.

I pulled all of the images off of the CGC registry to make an initial attempt at this, because all of them have metadata showing the score. I made a scraper that went through each page and downloaded the front/back images. I only downloaded images where I found both a front and a back, but that still ended up giving me a lot of random placeholders, and I cleaned most of that out. One problem is that the distribution of data is highly skewed, as you can see from this screenshot. There are a ton of 9.8 items -- like 5x as many as the next nearest category, and then 7,000 times as many as the smallest category. The other problem is that the images are overall pretty low quality, don't really show defects, and they are all in holders.

This is interesting though because it makes me think that there are a lot of folks throwing moderns at CGC to get a 9.8 rating on them to resell for more money.


[Screenshot: distribution of grades in the scraped registry data]
 

I guess I was asking what kind of learner is being used -- I'm guessing deep learning (seeing TensorFlow), which would be good here. Yeah, you're going to need a lot more data than that to get anything reasonable, IMO. Tons of features need to be evaluated over many samples. Good-quality and consistent scans are also important -- otherwise features get obscured or don't really exist. It would be nice to see what kind of features the learner finds useful. I'd guess it might find that long creases are a major feature for the <= VG categories, for example.

Another useful experiment you could try with limited data is to just categorize above 5.0 or below, for example. Literal cat vs. dog. See how well it works in/out of sample.
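In Keras terms, something like this (the folder names, image size, and split are placeholders):

```python
import tensorflow as tf

def bucket(grade: float) -> str:
    """Collapse a full grade into the two folders used for this experiment."""
    return "above_5" if grade >= 5.0 else "below_5"

# Assuming the scans have already been sorted into scans/above_5/ and scans/below_5/:
train_ds = tf.keras.utils.image_dataset_from_directory(
    "scans/", label_mode="binary", image_size=(224, 224),
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "scans/", label_mode="binary", image_size=(224, 224),
    validation_split=0.2, subset="validation", seed=42)

# Any small CNN with a 1-unit sigmoid output can then be fit on train_ds and
# checked against val_ds to see the in-sample vs. out-of-sample gap.
```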


2 minutes ago, bronze_rules said:

I guess I was asking what kind of learner is being used -- I'm guessing deep learning (seeing TensorFlow), which would be good here. Yeah, you're going to need a lot more data than that to get anything reasonable, IMO. Tons of features need to be evaluated over many samples.

Yeah, it's a deep learning algorithm. Right now I'm just fiddling with the model builder tool they overlay on top of the architecture. I need to get into the guts and make an actual program so I can tweak some of the options to see if I can get better results, but overall I think I probably need more data.
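For what it's worth, the rough Keras equivalent of what that kind of model-builder tooling does under the hood is transfer learning on a pretrained backbone, with the knobs (learning rates, how much of the backbone to unfreeze) exposed for tweaking. A sketch with placeholder values, not the ML.NET defaults:

```python
import tensorflow as tf

NUM_GRADES = 20  # placeholder: however many grade buckets survive data cleaning

backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False  # phase 1: train only the new classification head

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# ... fit the head, then optionally unfreeze and fine-tune with a tiny learning rate:
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```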


12 minutes ago, bronze_rules said:

Disagree. Obviously back scans would add data, but again, they're hard to get -- UNLESS -- CGC were willing to share, which I doubt. You need a lot of data to train reasonably well. I think the front information alone would bring us close. The idea is to get a general grade close to what it would be. Obviously there are going to be outliers, and humans can help here. But it's not much different from how humans approach it. The more general jobs go quicker.

I don't see a real downside to CGC sharing this sort of data overall. It's not like having a machine that can grade comics cuts into their business. You're not so much paying for the number as you are for the certification and encapsulation assuring that number is correct. If anything, a machine evaluator would just be the equivalent of the "Hey can you spare a grade" forum; people will still have to examine the thing no matter what.


In principle, this really is an interesting target for machine learning. In practice, this is going to need a lot of data points, and I'm not sure how realistic it is to get pre-slab scans of even 100 books at each grade point. But I do wish you the best of luck. It's a worthwhile endeavor.


1 minute ago, Qalyar said:

In principle, this really is an interesting target for machine learning. In practice, this is going to need a lot of data points, and I'm not sure how realistic it is to get pre-slab scans of even 100 books at each grade point. But I do wish you the best of luck. It's a worthwhile endeavor.

I'm starting to realize that. It seems unlikely that, without some coordination, I could get the data for this sort of sample on even one book. If I make the attributes more generalized so they can be applied to any book, that might work.


Actually, I was discussing this very subject with my wife the other night... how to get the grade of a comic book from photo scan(s). Don't ask why. But we concluded that it is impossible, at least in the practical sense. Lol. It is an incredibly difficult endeavor because it requires large amounts of data, and you're trying to translate something that is very subjective into objective, empirical, repeatable results. And really, grading involves assessing interior pages, staples, and I think side views. That's a lot of images for one comic book. That said, if you take a step back, remove the image recognition part of the problem, and treat it much like designing a credit score model, it could lead to something very interesting.

Here's how I envision the effort, from the 30,000-foot view:

  • Control your experiment to one comic book. Looks like you've got that covered. But I would have chosen Hulk #181.
  • Work on the grading model. Define every type of flaw, perhaps with the assistance of the Overstreet grading guide(?), giving each flaw type a score and weight.
  • Take a set of comics and, with the flaw-type chart you designed, have a person very knowledgeable and capable of grading comics document the flaws on each book (I'm actually laughing as I write this because it is so error-prone).
  • With the documented flaws, calculate the grade from the aggregated scores and weights. The result should fall within the NG - 10.0 range.
  • Translate the above model and calculation into code, specifically an API, and run a bunch of permutations against it (a rough sketch of the scoring step follows this list).
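A rough sketch of that aggregation step, with entirely made-up flaw types and penalty weights (the real chart would come out of the Overstreet-style definitions above):

```python
# Made-up flaw types and penalty weights, purely for illustration.
FLAW_PENALTIES = {
    "spine_tick":      0.1,
    "spine_crease":    0.5,
    "corner_blunting": 0.3,
    "color_break":     0.6,
    "tear":            1.5,
    "missing_piece":   4.0,
}

def grade_from_flaws(documented_flaws):
    """documented_flaws: list of (flaw_type, count) recorded by the human grader."""
    penalty = sum(FLAW_PENALTIES[flaw] * count for flaw, count in documented_flaws)
    return max(0.5, round(10.0 - penalty, 1))  # clamp to the low end of the scale

# Example: two spine ticks and one color-breaking crease -> 9.2
print(grade_from_flaws([("spine_tick", 2), ("color_break", 1)]))
```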

Might be simpler to build J.A.R.V.I.S (shrug)

 

 


11 minutes ago, ComicsAndCode said:

Actually, I was discussing this very subject with my wife the other night... how to get the grade of a comic book from photo scan(s). Don't ask why. But we concluded that it is impossible, at least in the practical sense. Lol. It is an incredibly difficult endeavor because it requires large amounts of data, and you're trying to translate something that is very subjective into objective, empirical, repeatable results. And really, grading involves assessing interior pages, staples, and I think side views. That's a lot of images for one comic book. That said, if you take a step back, remove the image recognition part of the problem, and treat it much like designing a credit score model, it could lead to something very interesting.

Here's how I envision the effort, from the 30,000-foot view:

  • Control your experiment to one comic book. Looks like you've got that covered. But I would have chosen Hulk #181.
  • Work on the grading model. Define every type of flaw, perhaps with the assistance of the Overstreet grading guide(?), giving each flaw type a score and weight.
  • Take a set of comics and, with the flaw-type chart you designed, have a person very knowledgeable and capable of grading comics document the flaws on each book (I'm actually laughing as I write this because it is so error-prone).
  • With the documented flaws, calculate the grade from the aggregated scores and weights. The result should fall within the NG - 10.0 range.
  • Translate the above model and calculation into code, specifically an API, and run a bunch of permutations against it.

Might be simpler to build J.A.R.V.I.S (shrug)

 

 

All true, but the point of deep learning is to find the features by itself. Hulk 181 seems like a good sample.

The beauty of deep learning would be discovering general features that match what humans look at, then comparing -- and maybe ending up with a better model, plus features that aren't readily quantifiable.
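One cheap way to peek at what a trained model is keying on is an occlusion test: slide a grey patch over the scan and watch where the predicted grade drops the most. A sketch, assuming a single-output (continuous-grade) Keras model and an already-preprocessed image array:

```python
import numpy as np

def occlusion_map(model, image, patch=32, stride=32):
    """image: preprocessed float array of shape (H, W, 3); model outputs one value."""
    base = float(model.predict(image[None], verbose=0)[0][0])
    h, w, _ = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = 0.5  # grey square hides this region
            pred = float(model.predict(occluded[None], verbose=0)[0][0])
            heat[i, j] = base - pred  # a big drop means the model cared about this region
    return heat  # high values near the spine or corners would match human grading habits
```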


Interesting experiment. Ironically, it might make more sense to ignore the specific-comic approach for two reasons. First, as you noted, it's hard to find enough copies of one comic to teach the algorithm. Second, the difference between a 9.8 ASM 300 and a 9.8 Action Comics 1 is going to be greater than the difference between a 9.8 ASM 300 and a 1.0 ASM 300.


2 hours ago, bronze_rules said:

All true, but the point of deep learning is to find the features by itself. Hulk 181 seems like a good sample.

The beauty of deep learning would be discovering general features that match what humans look at, then comparing -- and maybe ending up with a better model, plus features that aren't readily quantifiable.

That's an interesting point. I definitely was not looking at the problem with an evolving model in mind - I realize this is where the deep learning comes in. I have to admit, I know little about the subject. My perspective is one of removing the human factor from the process while still upholding a standard -- that standard being a well-defined model, worked out by a consortium of reputable graders, that is completely public information.

But if the model is ever-evolving, then I am not sure how practical it would be for the comic community. And if the ultimate goal is to have a "better model," that's great, but when does one person or group of people decide it is finally "better"?

 


1 hour ago, ComicsAndCode said:

That's an interesting point. I definitely was not looking at the problem with an evolving model in mind - I realize this is where the deep learning comes in. I have to admit, I know little about the subject. My perspective is one of removing the human factor from the process while still upholding a standard -- that standard being a well-defined model, worked out by a consortium of reputable graders, that is completely public information.

But if the model is ever-evolving, then I am not sure how practical it would be for the comic community. And if the ultimate goal is to have a "better model," that's great, but when does one person or group of people decide it is finally "better"?

 

Ideally a good machine learner would find things that the graders themselves might not be aware of. On the one hand, there might be a finite set of rules that are to be followed by graders (e.g. see Overstreet grading guide, x spine ticks allowed, length of ticks no greater than x inches, etc). But, I would bet graders don't follow those rules to a T. There's the subjective element.

Years ago I was speaking to a grader at an LCS, and he hadn't yet graded an issue I wanted. He asked me what I would grade it at, and I was about 1/2 to 1 grade lower than he would put it at. I was also strictly looking at grading defects and the rules set out by Overstreet. He told me that he was taught by the owner that you have to look at grading from a 'gestalt' perspective. I think I had it around F+/VF and he had VF/NM- (something like that). Clearly it was a nice-looking copy and I got his view, but there was something like a spine crease exceeding 1/4 inch that I was being a stickler about. I accepted his grade because I thought it presented well and I wanted it.

Now the reality is, much like real estate, the buyer and seller will probably skew in two different directions with a bias. Machine learning tries to eliminate that and come up with a more objective set of rules/features. I would also bet a lazy (or hurried, or uncertain) grader would look at the ML grade and say, yeah, that's about right. That would be the goal here. I do think ML can accomplish this, and it would be a huge positive for CGC to speed up pre-screening at low cost. The problem is they have to share the data in order to get someone to develop it. You need tons of data to get any kind of reasonable performance. And I think if anyone has the 'best' data set available to train a learner, it's CGC. On top of that, I know some very cheap, world-class developers who work on this kind of thing, if CGC were willing to share publicly.

 


I was giving this some thought recently and there's a lot of good info in this thread, but I think trying to "grade" the comic by comparing the current scale to image scans may be the wrong approach.

I think the best approach for getting machines to analyze comics is teaching them what "defects" are and then letting them analyze the scan of the book, noting and counting all defects across it. I think this data set would be easier to compile as well, since you wouldn't need millions of images, just a general rule set for evaluating any scan. The computer can analyze the image at greater magnification, note pixels that don't follow consistent/known comic book patterns, and easily put out a list of all those defects.

Then you would just have to make a grading scale that accounted for types and numbers of defects noted.
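A crude sketch of the "flag pixels that don't match the known cover" part, assuming you have a clean reference scan of the same cover to diff against (file paths and threshold are placeholders, and real use would need alignment and lighting normalization first):

```python
import numpy as np
from PIL import Image

def defect_fraction(scan_path, reference_path, threshold=40):
    """Fraction of pixels where the scan deviates noticeably from the reference cover."""
    scan = np.asarray(Image.open(scan_path).convert("L"), dtype=np.int16)
    ref = Image.open(reference_path).convert("L").resize((scan.shape[1], scan.shape[0]))
    diff = np.abs(scan - np.asarray(ref, dtype=np.int16))
    return float((diff > threshold).mean())

# Hypothetical usage: feed the fraction (or a per-region breakdown of it) into the
# defect-type/count grading scale described above.
# print(defect_fraction("candidate_scan.png", "reference_cover.png"))
```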


Since CGC has stated that they generally do not scan books beforehand, and only afterward if you paid for/requested one, I believe a better chance of getting the graded images you seek would be one of the professional pressers who document/scan all their work.

