target-isns recently was added to Rawhide, and will be in a future Fedora release. This add-on to LIO allows it to register with an iSNS server, which potential initiators can then query for available targets. (On Fedora, see isns-utils for both the server and the client query tools.) This removes one of the few remaining areas where other target implementations have been ahead of LIO.
Just got an email full of interesting questions, I hope the author will be ok with me answering them here so future searches will see them:
I searched on internet and I don’t find some relevant info about gluster api support via tcmu-runner. Can you tell me please if this support will be added to the stable redhat targetcli in the near future? And I want to know also which targetcli is recommended for setup (targetcli or targetcli-fb) and what is the status for targetcli-3.0.
tcmu-runner is a userspace daemon add-on to LIO that allows requests for a device to be handled by a user process. tcmu-runner has early support for using glfs (via gfapi). Both tcmu-runner and its glfs plugin are beta-quality and will need further work before they are ready for stable Fedora, much less a RHEL release. tcmu-runner just landed in Rawhide, but this is really just to make it easier to test.
RHEL & Fedora use targetcli-fb, which is a fork of targetcli, and what I work on. Since I’m working on both tcmu-runner and targetcli-fb, targetcli-fb will see TCMU support very early.
The -fb packages I maintain switched to a “fbXX” version scheme, so I think you must be referring to the other one. I don’t have any info about the RTS/Datera targetcli’s status, other than that nobody likes having two versions. The targetcli maintainer and I have discussed unifying them into a common version, but the un-fun work of merging them has not happened yet.
As mentioned in the beta release notes, the kernel in RHEL 7.2 contains a rebased LIO kernel target, equivalent to the upstream Linux 4.0-stable series.
This is a big update. LIO has improved greatly since 3.10. It has added support for SCSI features that enable VMWare VAAI support, as well as data integrity (DIF), and significant iSER work, for those of you using Infiniband. (SRP is also supported, as well as iSCSI and FCoE, of course.)
Note that we still do not ship support for the Fibre Channel qla2xxx fabric. It still seems to be something storage vendors and integrators want, more than a feature our customers are telling us they want in RHEL.
(On a side note, Infiniband hardware is pretty affordable these days! For all you datacenter hobbyists who have a rack in the garage, I might suggest a cheap previous-gen IB setup and either SRP or iSER as the way to go and still get really high IOPS.)
Users of RHEL 7’s SCSI target should find RHEL 7.2 to be a very nice upgrade. Please try the beta out and report any issues you find of course, but it’s looking really good so far.
Contrary to what RHEL 7.1 release notes might say, RHEL 7.1 should be fine as an iSER target, and it should be fine to use iSER even during the discovery phase. There was significant late-breaking work by our storage partners to fix both of these issues.
Unfortunately, there were multiple Bugzilla entries for the same issues, and while some were properly closed, others were not, and the issues erroneously were mentioned in the release notes.
So, for the hordes out there eager to try the iSER target on RHEL 7.1 who actually read the release notes: I hope you see this too and know it’s OK to give it a go.
I primarily work on Linux, so I put this in my Emacs config:
;; Linux mode for C
(setq c-default-style
      '((c-mode . "linux")
        (other . "gnu")))
However, other projects like QEMU have their own style preferences. So here’s what I added to use a different style for that. First, I found the qemu C style defined here. Then, to only use this on some C code, we attach a hook that only overrides the default C style if the filename contains “qemu”, an imperfect but decent-enough test.
(defconst qemu-c-style
  '((indent-tabs-mode . nil)
    (c-basic-offset . 4)
    (tab-width . 8)
    (c-comment-only-line-offset . 0)
    (c-hanging-braces-alist . ((substatement-open before after)
                               ;; structs have hanging braces on open
                               (class-open . (after))
                               ;; ditto if statements
                               (substatement-open . (after))
                               ;; and no auto newline at the end
                               (block-close)))
    (c-offsets-alist . ((statement-block-intro . +)
                        (substatement-open . 0)
                        (label . 0)
                        (statement-cont . +)
                        (innamespace . 0)
                        (inline-open . 0)
                        (block-close . c-snug-do-while))))
  "QEMU C Programming Style")

(c-add-style "qemu" qemu-c-style)
(defun maybe-qemu-style ()
  (when (and buffer-file-name
             (string-match "qemu" buffer-file-name))
    (c-set-style "qemu")))

(add-hook 'c-mode-hook 'maybe-qemu-style)
Gnome since 3.8 has restricted the Blank Screen time to between 1 and 15 minutes, or “Never”, to disable screen blanking/locking entirely. If this isn’t granular enough, you can set other values like so:

<del datetime="2014-03-25T22:25:42+00:00">dconf write /org/gnome/desktop/session/idle-delay 1800</del>

gsettings set org.gnome.desktop.session idle-delay 1800
The value is in seconds, so here we set the delay to 30 minutes (60*30=1800). It seems that once you do this, the UI will show “Never”, but the set value is still used correctly.
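For example, working out the value (the gsettings lines require a running GNOME session, so they're shown as comments rather than executed):

```shell
# 30 minutes, expressed in seconds
DELAY=$((60*30))
echo "$DELAY"   # → 1800

# In a GNOME session, you would then run:
#   gsettings set org.gnome.desktop.session idle-delay "$DELAY"
#   gsettings get org.gnome.desktop.session idle-delay   # verify: uint32 1800
```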
There is also a “Presentation Mode” shell extension that adds a button to inhibit screen lock, but for me, I still wanted to have it automatically lock, but just a little bit slower.
EDIT: dconf didn’t actually work! Apparently gsettings is the way to go.
When doing kernel development, doing it in a virtual machine can be very convenient, if there’s no need for actual hardware devices or features. This is especially true for network or client/server development where multiple physical machines would otherwise be needed. Plus, VMs reboot much faster than actual hardware!
The #1 tip: a shared development directory
My preferred setup is to use KVM via virt-manager. I use my editor and the compiler on the host, and then mount my development directory on the guest, and then install the compiled modules there. This lets development on the host remain undisturbed by unstable kernel versions and new target distro versions. In fact, my host is still on RHEL 6, although I’m working on features for much more current kernels and distro releases.
I use NFS to export a mount point, and then mount it in an identical location in each guest. Then, edit and build the kernel on the host, using the ‘O’ kernel make option to keep .config and build files separate from the kernel’s git tree, although both the source and build dirs are under the mount point accessible to the guest. Finally on the guest, “make O=/path/to/buildfiles modules_install install” and everything’s ready to test.
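As a concrete sketch of the above (the paths and subnet here are hypothetical examples, not my actual setup):

```shell
# Host, /etc/exports -- export the devel directory to the guests' subnet,
# then reload exports with 'exportfs -ra':
#   /home/me/devel  192.168.122.0/24(rw,no_root_squash)

# Guest, /etc/fstab -- mount it at the identical path:
#   host:/home/me/devel  /home/me/devel  nfs  defaults  0 0

# Host: build with 'O=' so build files stay out of the git tree, while
# both directories remain under the exported mount:
make -C /home/me/devel/linux O=/home/me/devel/build -j"$(nproc)"

# Guest: install modules and kernel from the shared mount:
make -C /home/me/devel/linux O=/home/me/devel/build modules_install install
```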
On a more recent host, and with guests that support it, an easier way to set up a shared directory would be VirtFS. NFS is a little fiddly to set up the first time, while virtfs looks pretty easy, and even a little faster and more secure.
Guest debug output onto the host. Set up a virtual serial port and point it at a file on the host. Then, add “console=ttyS0,115200 console=tty0” to the guest kernel command line. This will output everything to the file as well as keep outputting to the guest console. Then ‘tail -f’ the file, and you can be assured any kernel oopses or other messages will be captured. The file will be truncated every time the guest is restarted, BTW.
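In libvirt terms, the serial-port setup might look like this (the log path is a hypothetical example):

```shell
# Guest XML ('virsh edit <guest>') -- a serial port backed by a host file:
#   <serial type='file'>
#     <source path='/var/lib/libvirt/consoles/guest1.log'/>
#     <target port='0'/>
#   </serial>
#
# Guest kernel command line addition:
#   console=ttyS0,115200 console=tty0

# Host: watch the console output, oopses included:
tail -f /var/lib/libvirt/consoles/guest1.log
```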
Turn on all relevant debugging options (under “kernel hacking”) when compiling your kernel. If you start with the distribution’s .config file, many won’t be set. I’d recommend turning everything on. Also turn on frame pointers, and configure out drivers and subsystems the guest won’t need. ‘make localmodconfig’ (run from the guest) might help here.
If you are in an edit/compile loop, use the ‘M=’ make option to just build where you’re hacking, and save make from scanning the whole tree. Just use ‘make modules’ (on the host) and ‘make modules_install’ (on the guest) to save time, and don’t reinstall a kernel if it hasn’t actually changed. The build will increment the kernel version, which is printed during build and in ‘uname -v’ output, only if it actually built a new kernel image.
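A typical iteration might then look like this (drivers/target and the build path are illustrative; substitute wherever you are hacking):

```shell
# Host: rebuild only the directory being worked on, sparing make a full
# tree scan
make O=/home/me/devel/build M=drivers/target modules

# Guest: install modules; only run 'install' as well if the build banner
# (also visible in 'uname -v') shows a new kernel image was produced
make O=/home/me/devel/build modules_install
```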
Get an SSD. Pays for itself in saved time almost instantly.
Use ccache. If a source file has already been built with identical headers, ccache keeps the object around and saves the build from doing repeated work. This is as simple as installing ccache on the host, and then setting ‘export CC="ccache gcc"’ in your .bash_profile.
Make sure your guest has two or more virtual CPUs, in order to properly expose yourself to races when testing.
Set guests to auto-login and turn off screen savers. Use Ctrl-R (reverse history search) in bash aggressively when repeating test steps on the guest. Give different guests different-colored backgrounds to tell them apart more easily.
Make sure lockdep checking is enabled, and that it hasn’t already fired earlier in the boot process; once it fires, it turns itself off.
Familiarize yourself with magic-sysrq feature in Documentation/sysrq.txt, and poking it via /proc/sysrq-trigger.
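For example, a task-state dump (must be run as root; the output goes to dmesg and to the serial console log, if you set one up):

```shell
# 't' dumps the state of all tasks via magic sysrq
echo t > /proc/sysrq-trigger
```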
I hope some of this is helpful, and I’d love to know about more tips and techniques people have.
Before speaking at OSCON 2013, I gave a webcast version of my talk, and it’s now posted to Youtube (actual talk starting at 4m30s.)
Tenets of the Unix Way
History of Unix and Linux
Modern Linux and how it may diverge from Unix
The Linux Way — What is it?
The most significant change between this talk and the talk I gave six weeks later was that I figured out what the actual conclusion of the talk was:
Just like the Unix philosophies enabled the Unix command-line to develop and evolve more rapidly, the mix-and-match nature of Linux distros enables the Linux OS to also evolve more rapidly.
A distro is essentially its pool of packages, plus a handful of mutually-exclusive choices about how to run things. For example, init system, packaging system, and update frequency. These are intrinsic to the distro’s make-up, its ‘DNA’, and go beyond mutually-installable packages like apps and even desktop environments.
The cool thing is that the packages themselves are not part of a distro’s DNA. A new distro can rise up, change fundamental things about how the OS runs, and not have to fork those packages. Compared to forking more monolithic Unixes like *BSD or OpenSolaris, the barrier to entry for a new distro is relatively low, which is maybe why there are so many!
We now have a ‘gene pool’ of distros making different choices, and natural selection acts on this pool as users pick which distro they will use, and developers pick which to build on. This ensures that the Linux OS’s evolution is ultimately driven by its users. A popular distro can make some unpopular changes, but if it keeps doing so, eventually it will hurt its user base so much that other distros will take its place. This is a very good thing, and the conclusion I was trying to reach at the end of this webcast.