After a while of using tmux, you might find that you cannot reconnect to it from another terminal window, with the error message:

error connecting to /tmp/tmux-1000/default (No such file or directory)
The solution is easy but hard to find. Here is the magical command that worked for me; sending SIGUSR1 makes the tmux server recreate its socket. Hope it works for you too!
pkill -USR1 tmux
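After the signal, the server recreates its socket under /tmp/tmux-&lt;uid&gt;/ and you should be able to attach again. A typical recovery sequence looks like this (the session name 0 is just an example; use whatever tmux ls shows):

pkill -USR1 tmux
tmux ls
tmux attach -t 0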
Here I'd like to give a simple run-down of all the requirements to make Selenium available on a Raspberry Pi. Basically, we first install Firefox, then Geckodriver, and finally Selenium, and we are ready to go.
Before starting, it is worth noting that ChromeDriver no longer supports ARM processors, so it is not possible to use Chromium with Selenium on a Raspberry Pi.
First, install the system requirements: update the system, then install Firefox and Xvfb (a virtual framebuffer display server implementing X11);
sudo apt-get update
sudo apt-get install iceweasel
sudo apt-get install xvfb
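The run-down above mentions Geckodriver, but apt does not install it; on ARM you need to fetch a prebuilt binary from the Geckodriver releases page yourself. A hedged sketch, assuming the v0.23.0 arm7hf build (check the releases page for the current ARM build before copying this):

wget https://github.com/mozilla/geckodriver/releases/download/v0.23.0/geckodriver-v0.23.0-arm7hf.tar.gz
tar -xzf geckodriver-v0.23.0-arm7hf.tar.gz
sudo mv geckodriver /usr/local/bin/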
Then, install the Python requirements: Selenium, PyVirtualDisplay (which lets you run Selenium with a hidden browser display), and xvfbwrapper.
sudo pip install selenium
sudo pip install PyVirtualDisplay
sudo pip install xvfbwrapper
Hopefully everything ran well, and now you can test the installation.
from pyvirtualdisplay import Display
from selenium import webdriver

# start a virtual display so Firefox can run without a visible screen
display = Display(visible=0, size=(1024, 768))
display.start()

driver = webdriver.Firefox()
driver.get('http://www.erogol.com/')
driver.quit()

display.stop()
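We installed xvfbwrapper above, but the test script only uses PyVirtualDisplay. For completeness, a minimal equivalent with xvfbwrapper would look roughly like this (a sketch, not from the original setup):

from xvfbwrapper import Xvfb
from selenium import webdriver

# run the browser inside a temporary virtual display
with Xvfb(width=1024, height=768):
    driver = webdriver.Firefox()
    driver.get('http://www.erogol.com/')
    driver.quit()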
Online Hard Example Mining (OHEM) is a way to pick hard examples, at a reduced computation cost, to improve your network's performance on borderline cases, which generalizes to better overall performance. It is mostly used for object detection. Suppose you'd like to train a car detector, and you have positive images (with a car) and negative images (with no car). Now you'd like to train your network. In practice, you find yourself with a huge number of negatives as opposed to relatively few positives. To this end, it is clever to pick a subset of negatives that are the most informative for your network. Hard Example Mining is the way to do this.
In general, to pick a subset of negatives, you first train your network for a couple of iterations, then you run it over all your negative instances and pick the ones with the largest loss values. However, this is computationally toilsome, since you possibly have millions of images to process, and it is sub-optimal for your optimization, since you freeze your network while picking hard instances that will not all be useful over the next couple of iterations. That is, you assume that all the hard negatives you pick stay useful for every iteration until the next selection, which is an imperfect assumption, especially for large datasets.
Okay, so what does Online mean in this regard? OHEM solves these two aforementioned problems by performing hard example selection batch-wise. Given a batch of size K, it performs a regular forward pass and computes per-instance losses. Then it finds the M &lt; K examples in the batch with the highest loss values and back-propagates only the loss computed over the selected instances. Smart, huh? 🙂
It reduces computation by running hand in hand with your regular optimization cycle. It also drops the assumption of foreseen usefulness by picking hard examples per iteration, so now we really pick the hard examples for each iteration.
If you'd like to test it yourself, here is a PyTorch OHEM implementation that I offer you to use with a grain of salt.
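For a concrete picture of the batch-wise selection described above, a minimal OHEM-style loss in PyTorch could look like the following; the function name and the mean reduction over the selected examples are my own choices, not necessarily how the linked implementation does it:

import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, k):
    # per-instance losses, kept unreduced so we can rank them
    losses = F.cross_entropy(logits, targets, reduction='none')
    # select the k hardest examples in the batch (largest losses, M < K)
    hard_losses, _ = torch.topk(losses, k)
    # back-propagate only through the selected hard examples
    return hard_losses.mean()

In a training loop you would call it as, e.g., loss = ohem_loss(model(inputs), labels, k=inputs.size(0) // 2) and then loss.backward() as usual.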
Let's dive right in. The idea here is to use Tensorboard to plot your PyTorch trainings. For this, I use TensorboardX, which is a nice interface that communicates with Tensorboard while avoiding the TensorFlow dependencies.
First install the requirements;
pip install tensorboard
pip install tensorboardX
Things thereafter are very easy as well, but you need to know how to communicate with the board to show your training, and that is not obvious if you don't know Tensorboard already.
...
from tensorboardX import SummaryWriter
...
writer = SummaryWriter('your/path/to/log_files/')
...
# in the training loop
writer.add_scalar('Train/Loss', loss, num_iteration)
writer.add_scalar('Train/Prec@1', top1, num_iteration)
writer.add_scalar('Train/Prec@5', top5, num_iteration)
...
# in the validation loop
writer.add_scalar('Val/Loss', loss, epoch)
writer.add_scalar('Val/Prec@1', top1, epoch)
writer.add_scalar('Val/Prec@5', top5, epoch)
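To actually see the plots, point Tensorboard at the same log directory and open localhost:6006 in a browser:

tensorboard --logdir your/path/to/log_files/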
You can also visualize an embedding of your dataset:
from torchvision import datasets
from tensorboardX import SummaryWriter

dataset = datasets.MNIST('mnist', train=False, download=True)
images = dataset.test_data[:100].float()
label = dataset.test_labels[:100]
# flatten each 28x28 image into a 784-dim feature vector
features = images.view(100, 784)

writer = SummaryWriter()
writer.add_embedding(features, metadata=label, label_img=images.unsqueeze(1))
This is also how you can plot your model graph. The important part is to pass the output tensor to the writer together with your model, so that it can compute the tensor shapes in between. I also have to say, it is very slow for large models.
import torch
import torch.nn as nn
import torch.nn.functional as F
from tensorboardX import SummaryWriter

class Mnist(nn.Module):
    def __init__(self):
        super(Mnist, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
        self.bn = nn.BatchNorm2d(20)

    def forward(self, x):
        x = F.max_pool2d(self.conv1(x), 2)
        x = F.relu(x) + F.relu(-x)
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = self.bn(x)
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        x = F.log_softmax(x, dim=1)
        return x

model = Mnist()
# if you want to show the input tensor, set requires_grad=True
res = model(torch.autograd.Variable(torch.Tensor(1, 1, 28, 28), requires_grad=True))
writer = SummaryWriter()
writer.add_graph(model, res)
writer.close()