
My artificial neural networks are not human

In the first two chapter write-ups of Michael Nielsen's book, I've commented a couple of times on the parallels between the neural networks on the computer in front of me and the workings of human brains. Part of this might be well-justified – we do really have lots of neurons in our brains that interact with one another, and the artificial networks I've learned how to code may be correctly abstracting some of those biological processes, and hence correctly recognising handwritten digits.

On the other hand, I think that the terms neural and neuron prime me to make these comparisons. Would I be so quick to compare the code's performance to my own if it were called "many-matrix non-linear optimisation"? I'm not so sure.

In this post I describe a couple of ways in which the networks I've trained, which correctly classify about 98% of the digits in the MNIST test set, show some decidedly non-human behaviour. I'll tell this story in the approximate chronological order that I worked through it, instead of a more logical order; eventually I meander my way to reproducing some of Nguyen et al.'s paper on this subject.

PCA

I don't often use principal component analysis, and my understanding of it is weak. But I know that it exists, and I expected that it would be a reasonable way to classify the MNIST digits, thereby giving me a benchmark to compare the neural networks against. The idea is to treat each of the 784 pixels as a random variable, and then calculate the (784 × 784) covariance matrix using the training images. The eigenvectors of this matrix are the principal components of the data, and the hope is that the first "few" (in descending order of the corresponding eigenvalues) PCs explain most of the variance of the sample data.

(Usually you convert the sample data to z-scores before calculating the covariance matrix. That is, for each pixel, you subtract the mean over the training images of that pixel, and then divide by the standard deviation of that pixel. I did the first bit but not the second, because I was lazy and R's prcomp function spat at me, complaining that one of the pixels had a standard deviation of zero and refusing to divide by it. Unrelatedly, note that since the covariance matrix is symmetric, the eigenvectors are real and orthogonal.)
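For concreteness, here's a minimal sketch of that computation, assuming the training images are stacked into a matrix train_mat with one row per image and one column per pixel (the names here are mine, not from my actual code):

cov_mat = cov(train_mat)              # cov() centres each pixel internally
eig = eigen(cov_mat, symmetric=TRUE)  # symmetric, so real, orthogonal eigenvectors
var_explained = cumsum(eig$values) / sum(eig$values)  # eigenvalues come sorted, largest first
pc1 = eig$vectors[, 1]                # first principal component, a 784-vector
# Re-scale to [0, 1] to view it as a 28x28 eigen-digit:
pc1_img = matrix((pc1 - min(pc1)) / (max(pc1) - min(pc1)), nrow=28, ncol=28)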

What do these eigenvectors look like? Well, since the dot product of any pair of them is zero, they usually have some negative entries. But re-scaled so that the entries all lie in [0, 1], the first few eigen-digits are ghostly but recognisable:

I'd like to delve into these principal digits, but that can wait for a separate post (so far I've got to 70% classification accuracy with them). These pictures made me wonder: what would a digit created (somehow) from a neural network look like?

Neural network digits

The first idea I had was to start with a random input image, and use the back-propagation idea to gradient-descend the input pixels while holding constant the weights and biases of the network. I wasn't sure what would happen, and had three guesses:

1. The process wouldn't converge at all.
2. It would converge to something recognisable as the desired digit.
3. It would converge to a noisy image that looks nothing like a digit, but which the network nevertheless classifies confidently.

The middle guess was partly inspired by Google's DeepDream blog post, which showed neural nets generating, e.g., amusing pictures of dumbbells with arms attached. The third guess was informed by my memory of this Tumblr post, which (I learned after finding the post again) took its pictures from Nguyen et al., arXiv:1412.1897.
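To be concrete about the process itself, here's a minimal sketch of one descent step for a network with a single hidden sigmoid layer and quadratic cost (the function and variable names here are mine, not from my actual code):

sigmoid = function(z) 1 / (1 + exp(-z))

# One gradient-descent step on the input pixels x (a 784-vector), holding the
# trained weights W1, W2 and biases b1, b2 fixed; y is the one-hot target output.
descend_input = function(x, y, W1, b1, W2, b2, eta=0.05) {
  a1 = sigmoid(W1 %*% x + b1)                  # hidden activations
  a2 = sigmoid(W2 %*% a1 + b2)                 # output activations
  delta2 = (a2 - y) * a2 * (1 - a2)            # output error for quadratic cost
  delta1 = (t(W2) %*% delta2) * a1 * (1 - a1)  # back-propagate one layer
  grad_x = t(W1) %*% delta1                    # d(cost)/d(input pixels)
  pmin(pmax(x - eta * grad_x, 0), 1)           # step downhill, clip to [0, 1]
}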

Reality was a mix of 1 and 3. Sometimes it didn't converge (to be fair, I didn't put much effort into making it work), and when it did converge, it was difficult to discern much in the resulting pictures. The following are the digits 0 to 9 created by this process. The network classifies nine of them as the desired digit with "probability" from 96% to 99.8%; one of them spectacularly failed to converge, with no output neuron firing past 12%. Which is the odd one out?

The '1' is the one that didn't converge (second from left in the top row). Of the rest, only the 7 stands out to me as plausibly a digit – and even there, it's cheating to say that it worked, because I forgot to invert the pictures, so it should be a light digit on a dark background, when instead it converged the other way.

If I'd read that Google post again, I'd have seen what it says about the approach I tried here: "By itself, that doesn't work very well". Instead I figured that the approach was doomed, and went to the Nguyen et al. paper to see what they had done. (I was a bit annoyed at the lack of convergence – not just the '1': even the 96% or 98% cases weren't really close enough to 1 for my liking.)

A quick skim of the paper revealed that, in addition to gradient descent, they used a genetic algorithm to generate noisy images that the network classified very confidently. So I spent a couple of minutes on Wikipedia, then coded something that, although probably not properly implemented, is at least genetic-ish. It often returned images which the network classified with "probability" greater than 99.99%, which is the important thing.

(I'm typing this up a few days after generating these images, having since made many changes to my code, and I don't use a proper version control system. It looks to me like I might have started inverting the images, so that they'd be black ink on a white background, when making the '2' and the '9' here. Also, the base64 encoding of the '2' image looks like it has made it lighter still? These details aren't important for the main thread of this post though.)

None of these images is clearly recognisable as a digit, though most have some sort of visible structure; you can see a bit of a zero in the '0' image, for instance. Generally, though, the backgrounds are incredibly noisy. One idea: perhaps the backgrounds are noisy because the pixels near the edge of the image are usually plain white, so the network gains no information from those areas while training. As a result, any mishmash of pixel values away from the central area won't change the network's classification much, and the random initialisation of the images in the genetic-y algorithm means that much of each final image stays random noise.* That doesn't explain why the central area is also quite noisy, but there is usually some discernible structure near the middle of the pictures above.

*I want to emphasise that this is totally a post-hoc just-so story, even if it happens to be true. I'd have been just as convinced by the opposite story: almost all the training images had plain white pixels near the edge, therefore the network will only give high-confidence classifications if the background is mostly plain white.

I figured that if I averaged over many different output images for the same digit, the noisiness would cancel out and I'd see the essence of what the genetic-y algorithm was converging towards. For whatever reason, the '2's seemed to converge a lot faster than the others, so I asked for a thousand of them, and took the average:

That's... not obviously a '2', but you can sort of see it. Interestingly, while the background is the grey average across random noise, there are definitely whiter pixels near the middle of the image, contrasting with the drawn digit itself. (I haven't made the equivalent images for the other digits, because it'd take more computation time than I think is really warranted on this side-alley.)

To try to investigate what happens when background pixels get scrambled, I opened up the Ubuntu Paint-equivalent and drew a 2:

Just to check that all was working, I ran it through my network, and it classified it, with greater than 99.9% probability, as a 5. Well OK, maybe my mouse-writing isn't very good. So I drew a 9:

The network basically rejected this image. The highest output neuron was the '3', at about 2%. Now, my mouse-writing's not that bad, so in addition to seeing unrecognisably noisy images classified very strongly as digits, we've also seen very clear digits either strongly misclassified or not classified at all.

What I think is going on with these hand-drawn images is that I drew to the edge of the image, instead of keeping a clear border like there is in most of the training images. Indeed, when I drew a smaller 2 (albeit in a different style, though that shouldn't be an issue), the network called it correctly with "probability" greater than 99.99%:

I never spent much time investigating my original purpose along this line of thought (changing background pixels). As a single data point, though, the network's output layer fired at the 2 and 3 neurons at 46% and 22% respectively (others negligible) with this image as input:
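For anyone who wants to poke at this themselves, here's a rough sketch of the sort of test I mean. It assumes test_image is a 784-vector holding a correctly classified digit (a hypothetical name; test_network and out_network are from my chapter 3 writeup):

img = matrix(test_image, nrow=28, ncol=28)
# Scramble everything outside the central 20x20 block:
border = row(img) <= 4 | row(img) >= 25 | col(img) <= 4 | col(img) >= 25
img[border] = runif(sum(border))
temp_list = list()
temp_list[[1]] = as.vector(img)
print(test_network(temp_list, out_network)[[1]])   # output-layer activations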

More failed images and code

Earlier I truncated a quote from the Google blog post on making images from neural networks. "By itself, that doesn't work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated."

I haven't got this to work very well. My genetic-y algorithm, now informed by the need to get the overall stats right, goes like this:

1. Initialise a population of images, drawing each pixel from the target digit's histogram of pixel values.
2. Score every image: the cost is the absolute difference between the network's output and the desired output, plus a penalty for straying from the digit's average pixel-neighbour stats.
3. Copy out the best few images, then breed a replacement population: each pixel of a child comes from one of two distinct, randomly chosen parents, with a small, cost-dependent probability of mutating to a fresh histogram draw instead.
4. Repeat from step 2 until the best cost is within tolerance, or an iteration limit is reached.

Here are some 2's, allegedly:

And here is the mean of 1000 such 2's:

That one's actually not so bad if you zoom in. The biggest problem is that the right-hand part of the arc extends too far vertically (presumably because I enforce the pixel-neighbour correlations irrespective of location), and the horizontal stroke at the bottom also extends too far.

It was an interesting exercise working through this, even though the results are pretty useless. It was really boring to write this up though.

Here's the R code to calculate the histogram and pixel-neighbour stats (the latter are inspired by the variograms of my day job, though I take absolute values rather than squares). Roughly: for digit d and lag k, g_x[[d + 1]][k] is the mean of the absolute difference between pixels k columns apart, taken over all training images of d and all valid pixel pairs; g_y is the same for rows. The g_x and g_y are the lists which hold these stats. I haven't tidied this code up, and the histograms are stored as a matrix rather than a list.

load("digit_data/r_images_nielsen.rda")

num_lags = 14

g_x = list()
g_y = list()
length(g_x) = 10
length(g_y) = 10

num_rows = 28
num_cols = 28

bins = seq(0.1, 1, by=0.1)
num_bins = length(bins)

histo = matrix(0, nrow=10, ncol=num_bins)

for (i in 0:9) {
  print(i)
  this_i = which(digits$training == i)
  
  ct = 0
  g_x[[i+1]] = numeric(num_lags)
  g_y[[i+1]] = numeric(num_lags)
  
  for (j in this_i) {
    this_image = matrix(images$training[[j]], nrow=28, ncol=28)
    
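    # Accumulate absolute differences between pixels k columns apart (g_x)
    # and k rows apart (g_y); these are normalised to means after the image loop.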
    for (k in 1:num_lags) {
      g_x[[i + 1]][k] = g_x[[i + 1]][k] + sum(abs(this_image[ , 1:(num_cols - k)] - this_image[ , (1 + k):num_cols]))
      g_y[[i + 1]][k] = g_y[[i + 1]][k] + sum(abs(this_image[1:(num_rows - k), ] - this_image[(1 + k):num_rows, ]))
    }
     
    this_histo = numeric(num_bins)
    
    # Cumulative distribution:
    for (k in seq_along(bins)) {
      this_histo[k] = length(which(images$training[[j]] <= bins[k]))
    }
    
    # Subtract off the previous cumulative distribution:
    for (k in num_bins:2) {
      this_histo[k] = this_histo[k] - this_histo[k - 1]
    }
    
    histo[i + 1, ] = histo[i + 1, ] + this_histo
    
    ct = ct + 1
  }
  
  g_x[[i + 1]] = g_x[[i + 1]] / (ct * num_rows * ((num_cols - 1):(num_cols - num_lags)))
  g_y[[i + 1]] = g_y[[i + 1]] / (ct * num_cols * ((num_rows - 1):(num_rows - num_lags)))
  
  histo[i + 1, ] = histo[i + 1, ] / (ct * 784)
}

digit_stats = list()
digit_stats$histo = histo
digit_stats$g_x = g_x
digit_stats$g_y = g_y

save(digit_stats, file="digit_stats.Rda")

And here is the Rcpp to do the genetic-y algorithm, which gets appended to the code in my Nielsen chapter 3 writeup. (Edit: It also needs the line #include <queue> at the top.) There's a step where I need to get the indices of the best N images; rather than code this myself (I am not a very good computer science student and don't care about sorting algorithms) I've lifted Romain François's from here. It'll be pretty obvious which bits are his, since they look like they're written by someone who knows how to program. My main function here is kinda_genetic_digit().

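// Sum of absolute differences between the network's output activations and
// the desired (one-hot) output vector.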
double abs_cost(NumericVector output, NumericVector wanted_output) {
  long N = output.size();
  double cost = 0.0;
  
  for (long i = 0; i < N; i++) {
    cost += fabs(output[i] - wanted_output[i]);
  }
  
  return cost;
}

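// Cost of an input image: |network output - desired output|, plus the mismatch
// between the image's pixel-neighbour stats and the digit's average stats.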
// [[Rcpp::export]]
double get_this_cost(NumericVector input, NumericVector wanted_output, NumericVector histo,
                     NumericVector g_x, NumericVector g_y, List network) {
  network = feed_forward(input, network);
  long N = network.size();
  long lags_x = g_x.size();
  long lags_y = g_y.size();
  long i, j, k;
  List temp_list = network[N - 1];
  NumericVector this_output = temp_list["a"];
  double this_cost = abs_cost(this_output, wanted_output);
  
  // Now the neighbouring-pixel values.
  double vario_cost = 0.0;
  double vario_scale = 1.0 / (double)lags_x;
  NumericVector this_g_x(lags_x);
  NumericVector this_g_y(lags_y);
  
  // Hard-coding the dimensions of the input image!
  long num_rows = 28;
  long num_cols = 28;
  
  for (k = 0; k < lags_x; k++) {
    this_g_x[k] = 0.0;
    
    for (i = 0; i < num_rows; i++) {
      for (j = 0; j < num_cols - k - 1; j++) {
        // Pixels at horizontal lag k+1, to match how g_x[k] was computed:
        this_g_x[k] += fabs(input[i*num_cols + j] - input[i*num_cols + j + k + 1]);
      }
    }
    
    this_g_x[k] /= (double)(num_rows * (num_cols - k - 1));
    vario_cost += vario_scale * fabs(g_x[k] - this_g_x[k]);
  }
  
  for (k = 0; k < lags_y; k++) {
    this_g_y[k] = 0.0;
    
    for (i = 0; i < num_cols; i++) {
      for (j = 0; j < num_rows - k - 1; j++) {
        // Pixels at vertical lag k+1, to match how g_y[k] was computed:
        this_g_y[k] += fabs(input[j*num_cols + i] - input[(j + k + 1)*num_cols + i]);
      }
    }
    
    this_g_y[k] /= (double)(num_cols * (num_rows - k - 1));
    vario_cost += vario_scale * fabs(g_y[k] - this_g_y[k]);
  }
  
  return this_cost + vario_cost;
}

// http://gallery.rcpp.org/articles/top-elements-from-vectors-using-priority-queue/
template <int RTYPE>
class IndexComparator {
public:
  typedef typename Rcpp::traits::storage_type<RTYPE>::type STORAGE ;
  
  IndexComparator( const Vector<RTYPE>& data_ ) : data(data_.begin()){}
  
  inline bool operator()(int i, int j) const {
    return data[i] > data[j] || (data[i] == data[j] && j > i ) ;    
  }
  
private:
  const STORAGE* data ;
} ;

template <>
class IndexComparator<STRSXP> {
public:
  IndexComparator( const CharacterVector& data_ ) : data(data_.begin()){}
  
  inline bool operator()(int i, int j) const {
    return (String)data[i] > (String)data[j] || (data[i] == data[j] && j > i );    
  }
  
private:
  const SEXP* data ;
} ;

template <int RTYPE>
class IndexQueue {
public:
  typedef std::priority_queue<int, std::vector<int>, IndexComparator<RTYPE> > Queue ;
  
  IndexQueue( const Vector<RTYPE>& data_ ) : comparator(data_), q(comparator), data(data_) {}
  
  inline operator IntegerVector(){
    int n = q.size() ;
    IntegerVector res(n) ;
    for( int i=0; i<n; i++){
      // +1 for 1-based R indexing [deleted -- DB.]
      res[i] = q.top();
      q.pop() ;
    }
    return res ;
  }
  inline void input( int i){ 
    // if( data[ q.top() ] < data[i] ){
    if( comparator(i, q.top() ) ){
      q.pop(); 
      q.push(i) ;    
    }
  }
  inline void pop(){ q.pop() ; }
  inline void push( int i){ q.push(i) ; }
  
private:
  IndexComparator<RTYPE> comparator ;
  Queue q ;  
  const Vector<RTYPE>& data ;
} ;


template <int RTYPE>
IntegerVector top_index(Vector<RTYPE> v, int n){
  int size = v.size() ;
  
  // not interesting case. Less data than n
  if( size < n){
    return seq( 0, n-1 ) ;
  }
  
  IndexQueue<RTYPE> q( v )  ;
  for( int i=0; i<n; i++) q.push(i) ;
  for( int i=n; i<size; i++) q.input(i) ;   
  return q ;
}

// [[Rcpp::export]]
IntegerVector top_index( SEXP x, int n){
  switch( TYPEOF(x) ){
  case INTSXP: return top_index<INTSXP>( x, n ) ;
  case REALSXP: return top_index<REALSXP>( x, n ) ;
  case STRSXP: return top_index<STRSXP>( x, n ) ;
  default: stop("type not handled") ; 
  }
  return IntegerVector() ; // not used
}

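// Draw a random pixel value from the digit's histogram: choose a bin by
// inverse-CDF sampling, then sample uniformly within that bin.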
double histo_rand(NumericVector cumul_histo) {
  double x = R::runif(0.0, 1.0);
  long num_bins = cumul_histo.size();
  for (long i = 0; i < num_bins; i++) {
    if (x < cumul_histo[i]) {
      double x1 = (double)i / (double)num_bins;
      double x2 = (double)(i + 1) / (double)num_bins;
      return R::runif(x1, x2);
    }
  }
  
  // Hopefully not.
  Rcpp::stop("Bad call to histo_rand().");
  return R::runif(0.0, 1.0);
}

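// Genetic-ish search for an image that the network classifies as wanted_output,
// while roughly matching the digit's pixel histogram and neighbour stats.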
// [[Rcpp::export]]
NumericVector kinda_genetic_digit(NumericVector wanted_output, NumericVector histo,
                                  NumericVector g_x, NumericVector g_y, List network,
                                  long N_total = 100, long N_selected = 10, long max_iter = 1000, double tol=1.0e-5,
                                  double mutate_factor = 1.0) {
  long i, j, k, i1, i2;
  bool parent, mutate;
  double mutate_max;
  long num_layers = network.size();
  List temp_list = network[0];
  NumericVector input_layer = temp_list["a"];
  long num_pixels = input_layer.size();
  
  
  List population(N_total);
  List selected(N_selected);
  NumericVector population_scores(N_total);
  
  long num_bins = histo.size();
  NumericVector cumul_histo(num_bins);
  cumul_histo[0] = histo[0];
  for (i = 1; i < num_bins; i++) {
    cumul_histo[i] = cumul_histo[i - 1] + histo[i];
  }
  
  // Initialise population of images.
  for (j = 0; j < N_total; j++) {
    population[j] = NumericVector(num_pixels);
    NumericVector this_image = population[j];
    
    if (j < N_selected) {
      selected[j] = NumericVector(num_pixels);
    }
    
    for (k = 0; k < num_pixels; k++) {
      this_image[k] = histo_rand(cumul_histo);
    }
    
    population_scores[j] = -1.0 * get_this_cost(this_image, wanted_output, histo, g_x, g_y, network);
  }
  
  // And away we go....
  for (i = 0; i < max_iter; i++) {
    IntegerVector best_few = top_index(population_scores, N_selected);
    
    double this_score = population_scores[best_few[N_selected - 1]];
    
    Rprintf("Cost = %f at iter %ld\n", this_score, i);
    if (fabs(this_score) < tol) {
      return population[best_few[N_selected - 1]];
    }
    
    for (j = 0; j < N_selected; j++) {
      NumericVector this_selected = selected[j];
      NumericVector this_from_full = population[best_few[j]];
      for (k = 0; k < num_pixels; k++) {
        this_selected[k] = this_from_full[k];
      }
    }
    
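    // Breed the next generation: each pixel of a child comes from one of two
    // distinct random parents, with a cost-dependent chance of mutating to a
    // fresh draw from the digit's pixel histogram.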
    for (j = 0; j < N_total; j++) {
      i1 = floor(R::runif(0.0, (double)N_selected - 1e-7));
      do {
        i2 = floor(R::runif(0.0, (double)N_selected - 1e-7));
      } while (i1 == i2);
      
      NumericVector parent1 = selected[i1];
      NumericVector parent2 = selected[i2];
      
      NumericVector new_image = population[j];
      for (k = 0; k < num_pixels; k++) {
        parent = (R::runif(0.0, 1.0) > 0.5);
        if (parent) {
          new_image[k] = parent1[k];
        } else {
          new_image[k] = parent2[k];
        }
        
        mutate_max = 0.01 * fabs(this_score) * mutate_factor;
        if (mutate_max > 0.5) { 
          mutate_max = 0.5;
        }
        mutate = (R::runif(0.0, 1.0) < mutate_max);
        
        if (mutate) {
          new_image[k] = histo_rand(cumul_histo);
        }
        
        if (new_image[k] < 0.0) { new_image[k] = 0.0; }
        if (new_image[k] > 1.0) { new_image[k] = 1.0; }
      }
      
      population_scores[j] = -1.0 * get_this_cost(new_image, wanted_output, histo, g_x, g_y, network);
    }
  }
  
  // Out of iterations: return the best of the last selected images
  // (top_index returns indices in increasing order of score).
  return selected[N_selected - 1];
}

Finally, here is the R code to call the Rcpp and write the images. If you want to use the same network, you can download it here (it's only 600k or so).

library(Rcpp)
library(EBImage)

sourceCpp("train_test_network.cpp")
load("network_97.9pc.Rda")
load("digit_stats.Rda")

num_to_create = 100
# In a woeful convention I can't be bothered fixing, digits_to_test = 3 will generate '2's.
# In general you might loop over 1:10, but when using long g_x and g_y inputs to the cost
# function, the tolerances need to be tweaked digit by digit, so I now only do one digit at
# a time.
digits_to_test = 3:3

ga_digits = list()
length(ga_digits) = length(digits_to_test)

ct = 0

for (i in digits_to_test) {
  print(format(Sys.time(), "%X"))
  ct = ct + 1
  ga_digits[[ct]] = matrix(0, nrow = num_to_create, ncol = 784)
  
  tol = 0.055
  
  j = 0
  while (j < num_to_create) {
    digit_vec = numeric(10)
    digit_vec[i] = 1
    
    new_image = kinda_genetic_digit(digit_vec, as.vector(digit_stats$histo[i, ]), digit_stats$g_x[[i]], digit_stats$g_y[[i]], out_network,
                                    N_total=100, N_selected=5, max_iter=100, tol=tol, mutate_factor=1)
    
    this_cost = get_this_cost(new_image, digit_vec, as.vector(digit_stats$histo[i, ]), digit_stats$g_x[[i]], digit_stats$g_y[[i]], out_network)
    
    if (abs(this_cost) < tol) {
      print(sprintf("i = %d, j = %d", i, j))
      ga_digits[[ct]][j + 1, ] = new_image   # rows are 1-based, but j starts at 0
      
      temp_list = list()
      temp_list[[1]] = new_image
      this_output = test_network(temp_list, out_network)[[1]]
      print(this_output)
      
      this_img = Image(matrix(1 - new_image, nrow=28, ncol=28))
      out_file = sprintf("ga_input_images/%d/%04d.png", i - 1, j)
      writeImage(this_img, out_file)
      j = j + 1
    } else {
      print(sprintf("Not sufficiently converged, j = %d.", j))
    }
  }
  
  mean_img = colMeans(ga_digits[[ct]])
  this_img = Image(matrix(1 - mean_img, nrow=28, ncol=28))
  out_file = sprintf("ga_input_images/mean_%d.png", i - 1)
  writeImage(this_img, out_file)
}

Bleh. Back to Nielsen's book tomorrow.

Posted 2015-11-12, last updated 2015-11-13 (minor code edit).

