Time for another example of mixing web technologies inside a native application (as I promised before). As usual, we start with a screenshot:
(Since we always want to be cross-platform, Windows and Mac OS X users need not worry: this example works exactly the same there, it would just look slightly different.)
This is basically a tool that, given a directory in the file system, crawls it and produces an interactive treemap showing the allocated size of its files and subdirectories. The entire code is available in the usual X2 git repo (or the alternative repo); look under webkit/foldervis (you need Qt 4.6 or later). Keep in mind that the visualization is interactive: when you click on a certain block, it expands (with some nice animation) and shows what is underneath that block.
For the interface, we use the excellent webtreemap component from Evan Martin (try the live demo). It’s a treemap implementation using DOM, CSS, and JavaScript. Other treemap implementations exist; check, for example, Nicolas’ Canvas-based treemap demo based on his InfoVis toolkit.
The crawling process itself is carried out in a simple Qt-based class. Let’s have a look at this native world. First of all, we need a simple structure to hold the crawling result:
struct Entry {
    QString name;
    int size;
    QList<Entry> children;
};
And then let’s declare a class for the crawler implementation, conveniently called Crawler. The reason why it is a QObject (with some signals and slots) will become obvious soon:
class Crawler: public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString tree READ tree)

public:
    Crawler(QObject *parent);
    QString tree() const;

public slots:
    void start(const QString &dir);

signals:
    void progress(int count);
    void finished();

private:
    QString m_dir;
    int m_count;
    Entry m_rootEntry;

protected:
    Entry search(const QString &dir);
};
Crawling is triggered by calling its start() function, passing the name of the directory to be crawled. The implementation of this function is really simple, since it just passes the flow to the search() function, which does the actual heavy-duty crawling.
void Crawler::start(const QString &dir)
{
    m_dir = dir;
    m_count = 0;
    m_rootEntry = search(m_dir);
    emit finished();
}
Since we want to traverse all the subdirectories, this is a recursive process:
Entry Crawler::search(const QString &path)
{
    QList<Entry> children;
    int total = 0;

    // report progress and let the event loop breathe
    m_count++;
    emit progress(m_count);
    QApplication::processEvents();

    QFileInfoList list = QDir(path).entryInfoList();
    for (int i = 0; i < list.count(); ++i) {
        Entry entry;
        QFileInfo fi = list.at(i);
        if (fi.fileName() == "." || fi.fileName() == "..")
            continue;
        if (fi.isDir() && fi.baseName() != ".") {
            // recurse into the subdirectory
            entry = search(fi.absoluteFilePath());
        } else {
            entry.name = fi.fileName();
            entry.size = fi.size();
        }
        total += entry.size;
        children.append(entry);
    }

    Entry entry;
    entry.name = QFileInfo(path).fileName();
    entry.children = children;
    entry.size = total;
    return entry;
}
If you are familiar with Qt, nothing is mysterious about the above implementation. Note also that before processing every directory, we emit the progress signal. We will find out soon how we can use that.
An important point I would like to make is the use of processEvents(). Since this is supposed to be an example, we keep the code as simple as possible, without threads, synchronization, and other similar magic. Thus, the call to processEvents() is necessary to allow Qt’s main event loop to process pending events, including firing signals and invoking the corresponding slots.
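For comparison, here is a rough sketch of what that “magic” could look like if we did want to avoid processEvents(): the crawler is moved to a worker thread and driven through queued connections. This is not part of the example; the receiver object, its showProgress slot, and the dir variable are hypothetical.

// Hypothetical alternative, not used in this example: run the crawler in a worker thread.
QThread *worker = new QThread;
Crawler *crawler = new Crawler(0); // no parent, so it can be moved to another thread
crawler->moveToThread(worker);

// Queued connections deliver the signals safely across the thread boundary.
QObject::connect(crawler, SIGNAL(progress(int)), receiver, SLOT(showProgress(int)));
QObject::connect(crawler, SIGNAL(finished()), worker, SLOT(quit()));

worker->start();
// Invoke the start() slot from within the worker thread's event loop.
QMetaObject::invokeMethod(crawler, "start", Qt::QueuedConnection, Q_ARG(QString, dir));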
So far, we have been on the native side of the application. The web side is all about the webtreemap component, which we run conveniently inside our own subclass of QWebView:
class Visualizer: public QWebView
{
    Q_OBJECT

public:
    Visualizer(const QString &dir);

private slots:
    void setup();

private:
    QString m_dir;
    Crawler *m_crawler;
};
which has the following setup:
Visualizer::Visualizer(const QString &dir)
    : QWebView()
    , m_dir(dir)
    , m_crawler(new Crawler(this))
{
    setFixedSize(600, 600);

    QWebFrame *frame = page()->mainFrame();
    frame->setScrollBarPolicy(Qt::Horizontal, Qt::ScrollBarAlwaysOff);
    frame->setScrollBarPolicy(Qt::Vertical, Qt::ScrollBarAlwaysOff);

    load(QUrl("qrc:/index.html"));

    frame->addToJavaScriptWindowObject("crawler", m_crawler);

    QFile file(":/bootstrap.js");
    file.open(QFile::ReadOnly);
    QString bootstrap = file.readAll();
    file.close();
    frame->evaluateJavaScript(bootstrap);

    QTimer::singleShot(250, this, SLOT(setup()));
}
Let’s analyze what happens there. First of all, for simplicity we set a fixed window size, and therefore we can get away with no scrollbars at all. We also load the main HTML file, which embeds webtreemap, right from the resource using Qt’s compact resource system. As in the previous CodeMirror demo, this eases deployment since everything is packaged right with the executable.
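The setup() slot itself is not shown in this walk-through; presumably it just kicks off the crawling with the directory passed to the constructor, and everything is wired up from a small main(). Here is a rough sketch under that assumption (taking the directory from the command line is also an assumption):

// Presumed implementation: start crawling once the page has had a moment to load.
void Visualizer::setup()
{
    m_crawler->start(m_dir);
}

// A minimal entry point (assumed): crawl the directory given on the command line,
// or the current directory if none is given.
int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QString dir = (argc > 1) ? QString::fromLocal8Bit(argv[1]) : QDir::currentPath();
    Visualizer visualizer(dir);
    visualizer.show();
    return app.exec();
}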
This example’s resource contains the following files:
index.html
webtreemap.js
bootstrap.js
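A Qt resource collection (.qrc) file roughly along these lines would bundle them into the executable and produce the qrc:/ and :/ paths used in the constructor (the file name and layout shown here are an assumption, not taken from the example):

<!-- foldervis.qrc (sketch): packs the three files into the executable -->
<RCC>
    <qresource prefix="/">
        <file>index.html</file>
        <file>webtreemap.js</file>
        <file>bootstrap.js</file>
    </qresource>
</RCC>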
We need webtreemap.js for the webtreemap implementation and the main index.html to be loaded right into the QWebView (or rather, our own subclass). The purpose of bootstrap.js is clear if we look at its contents:
crawler.progress.connect(function(count) {
    document.getElementById('progress').textContent =
        'Crawling ' + count + ' directories...';
});

crawler.finished.connect(function() {
    document.getElementById('progress').style.display = 'none';
    appendTreemap(document.getElementById('map'), JSON.parse(crawler.tree));
});
The first part connects the signal named progress from an object called crawler to the supplied function, which basically just updates the text of an element in the web page, serving as nice feedback to the user that the crawling is still ongoing (in particular, if the folder contains thousands of files). If you recall the Visualizer constructor, there is a line which calls QWebFrame’s addToJavaScriptWindowObject() with an instance of the Crawler class, thereby adding a new object to the web page. When that object’s progress signal is emitted (from the native C++ side), the given JavaScript function (in the web world) is invoked. This demonstrates the signal-slot connection from a native QObject to the other side of the bridge.
The same thing happens with the other signal, finished(), which is emitted when the crawling is complete. This time, we need to supply the data to our webtreemap widget. crawler.tree actually comes from a property named tree of that same Crawler class. What it contains is the JSON-formatted (as a string) data of the entire directory tree, which also holds the size of each entry.
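How tree() produces that string is not shown in this walk-through, but a small recursive serializer over the Entry structure would do the job. Here is a rough sketch; note that Qt 4.6 has no built-in JSON support, escaping of special characters in file names is omitted for brevity, and the key names below are placeholders that would have to match whatever structure webtreemap actually expects:

// Sketch only: hand-rolled JSON serialization of the Entry tree.
static QString toJson(const Entry &entry)
{
    QString json = "{";
    json += "\"name\": \"" + entry.name + "\", ";
    json += "\"size\": " + QString::number(entry.size);
    if (!entry.children.isEmpty()) {
        QStringList items;
        foreach (const Entry &child, entry.children)
            items.append(toJson(child));
        json += ", \"children\": [" + items.join(", ") + "]";
    }
    json += "}";
    return json;
}

QString Crawler::tree() const
{
    return toJson(m_rootEntry);
}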
Surprisingly, that’s all the bridging you need!
All in all, the native part is pretty thin (sloccount reports around 150 lines), mostly the code to crawl the file system. By leveraging the DOM-based webtreemap, in a short time we wrap the result in a nice, interactive visualization.
Web technologies are nice, aren’t they?