We see Tigress used in many research projects, and this is very satisfying! We encourage anyone working in reverse engineering to try to attack Tigress-generated code, and anyone working in software protection to compare their techniques against the standard set of techniques provided by Tigress. That said, we often see papers where researchers have used Tigress but where the analyses they perform are not as accurate as one would like. Below are some common problems we've seen, and how to avoid them.
If you're working on a project that requires an "evaluation-by-obfuscation" component, the first thing to do is to read the paper "Evaluation Methodologies in Software Protection Research" [https://dl.acm.org/doi/10.1145/3702314] by Bjorn De Sutter, Sebastian Schrittwieser, Bart Coppens, and Patrick Kochberger. With respect to using Tigress for evaluation, they write:
Recommendation: Tigress is a complex tool that requires thoughtful decision-making, and hence a considerable effort, to select which protections to deploy, on which program fragments, and with which configurations. Make sure to describe your choices and provide convincing arguments for them. Importantly, over time, the default configuration options for Tigress have evolved, implying that the defaults that will be mentioned in the future on the Tigress website for its latest version might no longer reflect the default options of the version you used at the time of your research. So, make sure to mention what the default options are if you rely on them in your research.
They also point out how important it is to carefully document exactly which Tigress obfuscation script you used in your research, and which version of Tigress you used:
All papers using Tigress mention at least which SPs [Software Protection] were deployed with it, but in many cases, the authors omit the used configuration options, of which Tigress offers a wide variety. Some mention they used default configurations, but as these evolve over time, that is insufficient for reproducing the research and for interpreting the results.
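To make this concrete, it helps to publish, alongside the paper, the exact command used to produce each protected program, with every relevant option spelled out rather than left to defaults. The sketch below follows Tigress's documented command-line style, but the transformation, function name, seed, and option values are illustrative placeholders; check them against the Tigress version you actually used:

    # Tigress version X.Y (record the exact version and the seed with the script)
    tigress --Environment=x86_64:Linux:Gcc:4.6 \
            --Seed=42 \
            --Transform=Flatten --Functions=fib --FlattenDispatch=switch \
            --out=fib_flattened.c fib.c

Recording the version, the seed, the input file, and every non-default option in this form usually lets others regenerate the same obfuscated code and interpret your measurements correctly.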
And, finally, they note that it is surprisingly rare for researchers to extend obfuscation tools, even the ones that are available to academics:
SP researchers mostly use tools as is, even the flexible ones, rather than customizing them the way attackers do. Consider the Tigress obfuscator, of which the developers share its source code with colleagues in academia on demand. Still, we observed that no outsider papers (i.e., not involving members of Christian Collberg’s team behind Tigress) discuss or evaluate extensions or improvements of Tigress’ transformations. At most, other authors combine Tigress protections with their own or with other existing tools. Despite OLLVM being open source, we made similar observations, albeit to a lesser degree: 7/35 OLLVM papers extend OLLVM.
This is true. Only rarely has Tigress been extended outside of my own group. However, I am always keen to collaborate, so please get in touch if you have some interesting ideas to work on! As we described in the development chapter, Tigress has been designed from the ground up to be extensible.
Finally, please allow me to rant a bit. When you do practical research where the final outcome is 1) an "artifact" (a software tool) and 2) measurements of the performance of that tool, it is essential that you publish not only the measurements, but also the tool itself. You should do this not only because it's the "right thing to do" as an academic, but because if you don't, you hamper future research. Time and time again I have had papers rejected (I told you I was going to rant!) because I was trying to make the argument "my tool is better than their tool!", and reviewers (rightly) pointed out that without access to the tool, I couldn't make that argument.
With my colleague Todd Proebsting, I went so far as to study how common it is for Computer Systems researchers to not make their software artifacts available. You can read about it in the paper "Repeatability in computer systems research" [https://dl.acm.org/doi/10.1145/2812803].