June 20 2018

ProcessOne: Real-time Stack Issue #11

ProcessOne curates two monthly newsletters – tech-focused Real-time Stack and business-focused Real-time Enterprise. Here are the articles concerning tech aspects of real-time development we found interesting in Issue #11. To receive this newsletter straight in your inbox on the day it’s published, subscribe here.

How to Build a Private One to One Chat App from Scratch?

This article discusses in detail how to develop a one-to-one chat app from scratch. The technical stack for a WhatsApp-like chat app consists of Erlang (language), ejabberd (XMPP server), the XMPP protocol, and MySQL (database). The following steps will help you develop a one-to-one chat application in a hassle-free manner.

Deploy Bulletproof Embedded Software in Elixir with Nerves

Nerves defines an entirely new way to build embedded systems using Elixir. It is specifically designed for embedded systems, not desktop or server systems. It consists of a minimal Buildroot-derived Linux that boots directly to the BEAM VM, a ready-to-go library of Elixir modules to get you up and running quickly, and powerful command-line tools.

Connecting an Elixir Node to the Bitcoin Network

Pete Corey writes: “Since I first started diving into the world of Bitcoin development, I’ve wanted to build a simple node that connects to the network. The Elixir programming language gives us some fantastic tools to implement a server in the peer-to-peer network. Let’s see how far they can take us!”

Riot: A Distributed IRC and VOIP Client and Home Server

Riot is a free and open source decentralized instant messaging application that can be considered an alternative to Slack. This article takes a look at Riot's features, installation procedure, and usage.

Elixir’s Phoenix Powering a Real-time Web UI

A problem WallarooLabs needed to solve early on was deciding on the tooling that would power their metrics monitoring system. They needed a monitoring solution that provides real-time updates on the several steps a data message may take within a Wallaroo application. This post takes a deeper dive into the monitoring problem and how Phoenix and Elixir helped solve specific issues.

Fortnite: Postmortem of Service Outage at 3.4M CCU

Epic Games posted a postmortem covering, among other things, their XMPP outage. While mitigating a known instability problem, they overloaded a downstream system component, effectively paralyzing the presence flow. Without presence, users cannot see which of their friends are online, breaking most of Fortnite's social features, including the ability to form parties.

June 19 2018

Paul Schaub: Summer of Code: The demotivating week

I guess in anybody's project there is one week that stands out from the others by being way less productive than the rest. I just had that week.

I had to take Friday off due to circulation problems after a visit to the doctor (syringes suck!), so I had the joy of an extended weekend. On top of that, I was not at home at the time, so I didn't write any code during these days.

At least I got some coding done last week. Yesterday I spent the whole day scratching my head about an error I got when decrypting a message in Smack. Strangely, that error did not happen in my pgpainless tests. Today I finally found the cause of the issue and a way to work around it. It turns out that somewhere between key generation and loading the keys from persistent storage, something goes wrong. If I run my test with fresh keys, everything works fine, while if I run it after loading the keys from disk, I get an error. It will be fun working out what exactly is going wrong. My breakpoint-debugging skills are getting better, although I still often seem to skip over important code points while debugging.

My ongoing efforts to port the Smack OX code over from bouncy-gpg to pgpainless are still progressing slowly but steadily. Today I sent and received a message successfully, although the bug I mentioned earlier is still present. As I said, it's just a matter of time until I find it.

Apart from that, I created another very small pull request against the Bouncycastle repository. The patch just fixes a log message which irritated me: the message stated that some data could not be encrypted, while in fact data was being decrypted. Another patch I created earlier has been merged \o/.

There is some really good news:
Smack 4.4.0-alpha1 has been released! This version contains my updated OMEMO API, which I have been working on for at least half a year.

This week I will continue to integrate pgpainless into Smack. There is also still a significant lack of JUnit tests in both projects. One issue I have is that during my project I often have to deal with objects that bundle information together. Those data structures are needed in smack-openpgp, smack-openpgp-bouncycastle, as well as in pgpainless. Since smack-openpgp and pgpainless do not depend on one another, I need to write duplicate code to provide all modules with classes that offer the needed functionality. This is a real bummer and creates a lot of ugly boilerplate code.

I could theoretically create another module which bundles those structures together, but that is probably overkill.

On the bright side of things, I passed the first evaluation phase, so I got a ton of motivation for the coming days :)

Happy Hacking!

June 17 2018

Ignite Realtime Blog: Smack 4.3.0-rc1 and 4.4.0-alpha1 released

@flow wrote:

The Smack developer community is proud to announce the availability of the first release candidate of Smack 4.3. Users of Smack are encouraged to switch to the new 4.3 release family. The Smack 4.3 API is considered frozen, and the API changes between 4.2 and 4.3 are not as significant as the changes between Smack 4.1 and 4.2. More information can be found in the Readme of Smack 4.3 (please note that the Readme is a work in progress).

Together with the 4.3.0-rc1 release, we have also published the first alpha of Smack 4.4, which includes the updated and improved OMEMO API. Credits for this go to Paul.

As always, all the release artifacts are available on Maven Central.

June 13 2018

Monal IM: iOS 3.0.2 is out

I have released 3.0.2 to the iOS App Store.  So far I appear to have resolved the worst crashes.

June 12 2018

Monal IM: Update on OMEMO

I have an update on the status of OMEMO in Monal. I've completed my spike and have a very rough implementation working. I am able to communicate with Gajim and ChatSecure. I am actually using a lot of the same OMEMO code as ChatSecure, via Chris's CocoaPods. The shared code base should reduce duplicated effort and ensure compatibility between the two main Apple platform clients going forward.

The current code isn't anywhere near production quality, but once I clean it up more, you should start seeing it as an option to turn on in Mac betas in the next month or so. Below you can see my interactions with Gajim and ChatSecure.

June 11 2018

Paul Schaub: Summer of Code: Evaluation and Key Lengths

The week of the first evaluation phase is here. This is the fourth week of GSoC – wow, time flew by quite fast this year :)

This week I plan to switch my OX implementation over to PGPainless in order to have a working prototype which can differentiate between sign, crypt and signcrypt elements. This should be pretty straightforward. In case anything goes wrong, I'll keep the current implementation as a working backup solution, so we should be good to go :)

OpenPGP Key Type Considerations

I spent some time testing my OpenPGP library PGPainless, and during testing I noticed that messages encrypted and signed using keys from the family of elliptic curve cryptography were substantially smaller than messages encrypted with common RSA keys. I already knew that one benefit of elliptic curve cryptography is that the keys can be much smaller while providing the same security as RSA keys. But what was new to me is that this also applies to the length of the resulting message. I did some testing and came to interesting results:

In order to measure the lengths of the produced ciphertext, I created some code that generates two sets of keys and then encrypts messages of varying lengths. Because OpenPGP for XMPP: Instant Messaging only uses messages that are encrypted and signed, all messages created for my tests are encrypted to, and signed with, one key. The size of the plaintext messages ranges from 20 bytes all the way up to 2000 bytes (1000 chars).

Diagram comparing the lengths of ciphertext of different crypto systems

Comparison of Cipher Text Length

The resulting diagram shows how quickly the size of OpenPGP encrypted messages explodes. Let's assume we want to send the smallest possible OX message to a contact. That message would have a body of less than 20 bytes (less than 10 chars). The body would be encapsulated in a signcrypt-element as specified in XEP-0373. I calculated that the length of that element would be around 250 chars, which makes 500 bytes. 500 bytes encrypted and signed using a 4096-bit RSA key makes 1652 bytes of ciphertext. That ciphertext is then base64-encoded for transport (a rule of thumb for calculating base64 size is ceil(bytes/3) * 4), which results in 2204 bytes. Those bytes are then encapsulated in an openpgp-element (which adds another 94 bytes) that can be appended to a message. All in all, the openpgp-element takes up 2298 bytes, compared to a normal body, which would only take up around 46 bytes.

So how do elliptic curves come to the rescue? Let's assume we send the same message again using a 256-bit ECC key on the curve P-256. Again, the length of the signcrypt-element would be 250 chars or 500 bytes in the beginning. OpenPGP-encrypting those bytes leads to 804 bytes of ciphertext. Applying base64 encoding results in 1072 bytes, which finally makes a 1166-byte openpgp-element. That is around half the size of the RSA-encrypted message.
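As a sanity check, the arithmetic above can be reproduced with a short script. The ciphertext sizes (1652 and 804 bytes) are the measured values quoted in the text; the only formulas involved are the base64 rule of thumb and the constant 94-byte openpgp-element overhead:

```python
import math

def base64_size(n_bytes):
    # rule of thumb: base64 output is ceil(n/3) * 4 bytes
    return math.ceil(n_bytes / 3) * 4

def openpgp_element_size(ciphertext_bytes, element_overhead=94):
    # the ciphertext is base64-encoded for transport, then wrapped
    # in an openpgp-element, which adds roughly 94 bytes
    return base64_size(ciphertext_bytes) + element_overhead

# measured ciphertext sizes for the same 500-byte signcrypt-element:
print(openpgp_element_size(1652))  # RSA-4096: 2298 bytes
print(openpgp_element_size(804))   # ECC P-256: 1166 bytes
```

Plugging in the measured ciphertext sizes reproduces the 2298 and 1166 byte totals from the text.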

For comparison: I estimated a typical XMPP chat message body to be around 70 characters or 140 bytes based on a database dump of my chat client.

We must not forget, however, that the stanza size follows a linear function of the form y = m*x + b, so as the plaintext size grows, the difference between RSA and ECC becomes less and less significant.
Looking at the data, I noticed that applying OpenPGP encryption always added a constant number of bytes to the size of the plaintext. Using 256-bit ECC keys only adds around 300 bytes, encrypting a message using 2048-bit RSA keys adds ~500 bytes, while RSA with 4096 bits adds 1140 bytes. The formula for my setup would therefore be y = x + b, where x and y are the sizes of the message before and after applying encryption and b is the overhead added. This formula doesn't take base64 encoding into consideration. Also, if multiple participants (and therefore multiple keys) are involved, the formula is expected to underestimate, as the overhead will grow further.
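Expressed as code, the model looks like the sketch below. The overhead constants are the rough values observed above, for a single recipient, with base64 encoding deliberately left out as in the text:

```python
# approximate encryption overhead b (in bytes) per key type,
# taken from the measurements described above
OVERHEAD = {
    "ecc-256":  300,
    "rsa-2048": 500,
    "rsa-4096": 1140,
}

def estimate_ciphertext_size(plaintext_bytes, key_type):
    # y = x + b: ciphertext size grows linearly with plaintext size,
    # plus a constant per-keytype overhead (single recipient, no base64)
    return plaintext_bytes + OVERHEAD[key_type]

# a 500-byte signcrypt-element encrypted with RSA-4096:
print(estimate_ciphertext_size(500, "rsa-4096"))  # 1640 bytes, close to the measured 1652
```

For the 500-byte example, the model predicts 1640 bytes for RSA-4096 and 800 bytes for ECC, close to the measured 1652 and 804.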

One could argue that using smaller RSA keys would reduce the stanza size as well, although not as much; but remember that RSA keys have to be big to be secure. A 3072-bit RSA key provides roughly the same security as a 256-bit ECC key. Quoting Wikipedia:

The NIST recommends 2048-bit keys for RSA. An RSA key length of 3072 bits should be used if security is required beyond 2030.

As a conclusion, I propose to add a paragraph to XEP-0373 suggesting the use of ECC keys to keep the stanza size low.

June 06 2018

Paul Schaub: Summer of Code: PGPainless 2.0

In previous posts, I mentioned that I forked Bouncy-GPG to create PGPainless, which will be my simple-to-use OX/OpenPGP API. I have some news regarding that, since I made a radical decision.

I'm not going to fork Bouncy-GPG anymore, but will instead write my own OpenPGP library based on BouncyCastle. The new PGPainless will be more suitable for the OX use case. The main reason is that Bouncy-GPG followed a pattern where the user has to know whether an incoming message is encrypted, signed, or both. This pattern does not apply to OX very well, since you don't know what content an incoming message has. This was a deliberate decision made by the OX authors to circumvent certain attacks.

Ironically, another reason why I decided to write my own library is Bouncy-GPG's many JUnit tests. I tried to make some changes, which resulted in breaking tests all the time. This might of course be a bad sign, indicating that my changes are bad, but in my case I'm pretty sure that the tests are just a little bit oversensitive :) For me it would be less work/more fun to create my own library than to try to fix Bouncy-GPG's JUnit tests.

The new PGPainless is already capable of generating various OpenPGP keys and of encrypting, signing, and decrypting messages. I noticed that using elliptic curve encryption keys, I was able to reduce the size of (short) messages by a factor of two. So recommending EC keys to implementors might be worth a thought. There is still a little bug in my code which causes signature verification to fail, but I'll find it – and I'll kill it.

Today I spent nearly 3 hours debugging a small bug in the decryption code. It turns out that this code works as I intended,

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
// nextObject() parses the next OpenPGP object directly from the underlying data
Object o = objectFactory.nextObject();

while this code does not:

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
// iterator().next() goes through the Iterable interface and behaves subtly differently
Object o = objectFactory.iterator().next();

The difference is subtle, but apparently deadly.

You can find the new PGPainless on my Gitea instance :)

Jérôme Poisson: Decentralized code forge, based on XMPP

With the recent announcement concerning the change of ownership of the biggest known centralized code forge, we have seen here and there discussions about the creation of a similar tool, but decentralized.

I've used this occasion to recall the work done to implement tickets and merge requests in Salut à Toi (SàT), work which went relatively unnoticed at the time of writing, about 6 months ago.

Now, I would like to give some details on why we are building those tools.

First of all, why not the big forge? After all, a good part of current libre software is already using it! Well, first, it's not libre, and we committed ourselves in our social contract to use libre software as much as possible, infrastructure included. Then because it's centralized, and there too our social contract is pretty clear, even if it's not as important for infrastructure as it is for SàT itself. Finally, because we are currently using Mercurial, and the most famous forge is built around Git.
We do not hide the fact that we have already asked ourselves, in general assembly, whether to use this platform or not (cf. the minutes, in French); we were mainly interested in the great visibility it can offer.

« It's centralized? But "Git" is decentralized! » is a point we often hear, and it's true: Git (and Mercurial, and some others) is decentralized. But a code forge is not the version control system; it's all the tools around it: hosting, tickets, merge/pull requests, comments, wikis, etc. And those tools are not decentralized at the moment, and even if they are often usable through a proprietary API, they are still under centralization rules, i.e. the rules of the hosting service (and its technical hazards). This also means that if the service doesn't want a project, it can refuse, delete, or block it.

Centralization also makes it technically easy to catalog and search projects… which are on the service. Any external attempt will then have more difficulty being visible and attracting contributors/users/help. This is a situation we know very well with Salut à Toi (we are not present on proprietary and centralized "social networks" for the same reasons), and we find it unacceptable. It goes without saying that concentrating projects on a single platform is the best way to contribute to and exacerbate this state of affairs.
Please note, however, that we are not judging or attacking people and projects who made different choices. These positions are linked to our political commitment.

Why, then, not use existing libre projects, already advanced and working, like GitLab? Well, first because we are working with Mercurial and not Git, and secondly because we would put ourselves here too in a centralized solution. And there is another point: there are nearly no decentralized forges (Fossil maybe?), and we already have nearly everything we need with SàT and XMPP. Let's add that there is some pleasure in building the tools we are lacking.

SàT is on the way to becoming a complete ecosystem, offering most, if not all, of the tools needed to organise and communicate. But it is also generic and reusable. That's why the "merge requests" system is not tied to a specific SCM (Git or Mercurial): it can be used with other software, and it is actually not only usable for code development. It's a component which will be used wherever it is useful.

To conclude this post, I would like to remind you that if we want to see a decentralized, ethical and politically committed alternative to build our code, organise ourselves, and communicate, we can make this real by cooperating and contributing, be it with code, design, translations, documentation, testing, etc.
We recently got some help with packaging on Arch (thanks jnanar and previous contributors), and there are continuous efforts for packaging in Debian (thanks Robotux, Naha, Debacle, and other Debian XMPP packagers). If you can participate, please contact us (see our official website); together we can make the difference.
If you are lacking time, you can support us as well on Liberapay. Thanks in advance!

Monal IM: iOS 3.0.2 and OSX 2.1.2 betas out

I am still cleaning up all of the issues people have seen (and some old friends) in the latest releases. There are new betas out. I will be looking for feedback and crash reports. I hope to have the next updates out this week. I know there have been almost weekly releases since the 3.0 release. I'm hoping to slow down to a more manageable release cycle once the code is more stable.

June 05 2018

Erlang Solutions: MongooseIM 3.0.0 - Application turbocharger

MongooseIM 3.0.0 is out and with it come many improvements to our global messaging solution! Over the years we have proven that MongooseIM is the way to go when building a scalable, secure messaging system that never fails. With new features and fixes, our battle tested, highly customisable platform provides an enterprise friendly toolbox everyone can use. Whether you’re an XMPP expert or an entrepreneur looking to bring to life your idea for a community building app, MongooseIM platform helps you build a product tailored to your needs, that will easily grow to match your ambition. Find out what goodies we’ve managed to pack up for you this time and see how we can aid your users’ experience with our truly instant messaging platform.

What is so great about 3.0.0?!

As a team we've switched into a faster gear, and our latest release is a reflection of that. MongooseIM 3.0.0 is an enterprise-ready, stable solution that now works faster than ever, delivering a smooth messaging experience to your users. It features important upgrades that will allow your servers to process even more messages per minute and save memory for additional user sessions. All of this is thanks to a couple of improvements, the new XML parser being the prominent highlight.

Efficiency is not the only reason why we're so proud of this release. It also features a prototype Inbox implementation. It's an essential extension for virtually every chat application. Our rich experience in this area allowed us to design a solution that should match most use cases already and will be expanded even further!

As usual, there are also other improvements we’d like to share with you. You may find the full list in our changelog but we’ve picked five of them to describe, as we feel you should learn more about them.

Achieve more with the same hardware

Thanks to several important changes and improvements, MongooseIM is now able to process information faster and consume less resources. It means your servers may handle more users and traffic with the MongooseIM upgrade alone.

Depending on your specific application, 25-400% better performance may be expected. In fact, the richer and more complex the traffic, the better the results you'll get compared to previous MongooseIM versions!

Three aspects of MongooseIM have been modified in order to achieve this:

  1. All messages from users are interpreted in a completely new way
  2. More users can connect to a single server per second
  3. All user sessions store as little information as possible when they are idle

Hello inbox

We’ve implemented Inbox features in the past for various projects and the time has come to pour the best ideas and experiences into an extension open for everyone!

A few words of explanation for those not familiar with the Inbox feature. It is the view in a chat application that you see every time you open it: a list of all conversations, with excerpts of the last messages and unread message counts. Simple as that!

Unfortunately, as there is no official Inbox specification in XMPP yet, we've come up with a custom protocol (thank you, XMPP, for your extensibility!) for this purpose. We're going to submit it as a XEP (XMPP Extension Protocol) for review by the community, but in the meantime you can enjoy its simplicity and intuitiveness. All you need to do is enable mod_inbox and implement a few simple IQ stanzas in your client application.

Please keep in mind, though, that this extension is still at an experimental stage and will be marked as stable in one of our future releases.

Under the hood

We’ve also added some lower level changes that are going to be useful to developers, CTOs and devops.

Performance bundle: Acceptor pool, session hibernation, new XML parser

We've been using the expat parser since the very beginning. Recently, our brave C++ warrior in the MongooseIM team thought of replacing it with an alternative: RapidXML. Everything indicated that it might consume fewer resources if properly used.

And he did use it in an excellent way. The whole C code was rewritten in C++ with the new library integrated. Since RapidXML requires only minimal per-user state (which is actually kept in Erlang terms), far fewer allocations are required and the code is simply cleaner.

What about the performance itself? For smaller stanzas the difference is not drastic, but it is noticeable. The test involved 100,000 users sending standard, small messages.

Please take a look at CPU usage graphs first:

Fig. 1: MongooseIM 2.2.2 CPU usage over time

Fig. 2: MongooseIM 3.0.0 CPU usage over time

Very similar, but the new version is a bit better overall.

Regarding the memory consumption:

Fig. 3: MongooseIM 2.2.2 memory usage over time

Fig. 4: MongooseIM 3.0.0 memory usage over time

Now that's already impressive: 3.0.0 uses ~2GB less RAM, which is an improvement of over 25%!

Wait, wait. If that was "impressive", then what do we call this? 1000 users exchanging large, ~36kB messages. CPU goes first again.

Fig. 5: MongooseIM 2.2.2 CPU usage over time

Fig. 6: MongooseIM 3.0.0 CPU usage over time

Well, it’s only 4 times better. :) In terms of memory usage the drop is less significant but still observable.

Fig. 7: MongooseIM 2.2.2 memory usage over time

Fig. 8: MongooseIM 3.0.0 memory usage over time

To wrap it up: all applications will surely benefit from the new parser, but it's especially important for those which process complicated, nested, rich stanzas.

Obviously, rewriting thousands of lines of code in C++ isn’t always the only method of improving performance. Sometimes it’s about small ideas, such as using a pool of acceptors or eager hibernation.

The former involves using more than one process to accept incoming connections from clients. Despite being a fairly cheap operation (it’s only about accepting a connection and creating a client process), this single process became a bottleneck in some applications. In 3.0, 100 acceptors are created by default.

The latter is a bit more complicated. For people less familiar with Erlang, this might sound similar to Java's Hibernate framework; actually, Erlang's hibernation is not related to database mappings. When a process gets hibernated, its memory is garbage collected and it is removed from the scheduler queue (a bit of a simplification, but you get the idea). We already had such a mechanism in place, but the hardcoded timeout for hibernation was 60s. When you think about it, a client's process is idle most of the time, at least in a computer's world where a single CPU cycle takes less than 1ns. Load tests have proved that hibernating immediately after processing a stanza leads to lower memory usage at minimal cost in CPU time.

The first two graphs show CPU usage in a presence-based test. Most of the time ~6.5k users were connected and sending a presence update every 20 seconds to a roster of 8 friends.

Fig. 9: MongooseIM 2.2.2 CPU usage over time

Fig. 10: MongooseIM 3.0.0 CPU usage over time

Fairly similar, I'd say. The second pair of graphs shows the memory usage decrease in the same test.

Fig. 11: MongooseIM 2.2.2 memory usage over time

Fig. 12: MongooseIM 3.0.0 memory usage over time

Now, that looks definitely better, right? However, if it turns out that frequent hibernations impact your server’s performance, you may easily tune the timeout in the configuration file (look for the hibernate_after option).

Improved ODBC support

Before 3.0, MongooseIM used the odbc application from OTP to execute queries via ODBC. Unfortunately, this library is not maintained actively enough to match our requirements, especially when it comes to SQL type support. Luckily for us, there is a community-developed library named eodbc. With its help, MongooseIM's compatibility with e.g. MSSQL has improved significantly. What is more, in order to ensure it, we've begun testing the ODBC connection with MSSQL on Travis!

A byproduct of this refactoring is a completely new escaping API in our RDBMS layer. It's more intuitive now and much less error-prone. It is now virtually impossible to use an unescaped value in a query, and the escaping is always done appropriately for the chosen RDBMS.

Farewell, Message Archive Management v0.2!

Getting rid of MAM 0.2 support means several things:

Easier code maintenance:

  1. There were completely separate functions to handle 0.2 stanzas
  2. Without 0.2 we can reduce the test count

The <archived> element is no longer available. Please configure MongooseIM to inject <stanza-id> into messages instead.

Despite its importance (MAM 0.2 was the first version supported by MongooseIM), it has become obsolete over time. If your application still uses MAM 0.2, we highly recommend you update your XMPP library and the code using it. Storage backend wise, newer versions are backwards compatible.

Please feel free to read the detailed changelog, where you can find a full list of source code changes and useful links.

What’s next?

After introducing big, important changes over the past few releases, we're going to take our time to polish what we already have.

Above all else, the Inbox feature is going to be expanded. We're planning to support more backends and introduce new functions (e.g. sorting by timestamp). Also, we'd like to propose a proto-XEP to the XMPP community, so the conversation list we've designed may become an official standard common to clients and other servers. After all, MongooseIM is not our only priority; we care about the state of XMPP as a communication protocol as well!

We're slowly heading towards a configuration file revolution. Its highlight will be a new config format that will be friendlier to everyone not familiar with Erlang syntax. Currently we're considering YAML, TOML, Cuttlefish and Conform. What is more, the configuration will undergo a major cleanup and flexibility improvements.

The third item I'd like to share with you applies to everyone working with MongooseIM code. One of the main structures in our server, the Mongoose Accumulator (or mongoose_acc), is going to be significantly refactored. Our aim is to make it more intuitive and organised. Its content will be richer, with a clear contract and scope. We're redesigning it as you read these words. :)

Test our work on MongooseIM 3.0 and share your feedback

Help us improve the MongooseIM platform:

  1. Star our repo: esl/MongooseIM
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter
  4. Download the Docker image with the new release
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.

June 04 2018

Tigase Blog: A note about hosted Vhosts

Greetings users of Tigase!

June 01 2018

Paul Schaub: Summer of Code: Command Line OX Client!

As I stated earlier, I am working on a small XMPP command line test client, which is capable of sending and receiving OpenPGP encrypted messages. I just published a first version :)

Creating command line clients with Smack is super easy. You basically just create a connection, instantiate the manager classes of the features you want to use, and create some kind of read-execute-print loop.
Last year I demonstrated how to create an OMEMO-capable client in 200 lines of code. The new client follows pretty much the same scheme.

The client offers some basic features like adding contacts to the roster, as well as obviously OX related features like displaying fingerprints, generation, restoration and backup of key pairs and of course encryption and decryption of messages. Note that up to this point I haven’t implemented any form of trust management. For now, my implementation considers all keys whose fingerprints are published in the metadata node as trusted.

You can find the client here. Feel free to try it out, instructions on how to build it are also found in the repository.

Happy Hacking!

May 31 2018

JC Brand: 2018 Gulaschprogrammiernacht and organizing sprints for XMPP

Recently I attended the Gulaschprogrammiernacht for the first time.

It's a hacker/maker event in the Zentrum für Kunst und Medien (Centre for Arts and Media) in Karlsruhe, Germany.

AFAIK it's organized by the local chapter of the infamous Chaos Computer Club.

I heard about it from Daniel Gultsch on Twitter. It sounded like fun, so I decided to attend and spend the time adding OMEMO support to Converse.

Guus der Kinderen and I intended to organize an XMPP sprint for that weekend in Düsseldorf, but we were cutting it a bit fine with the organization, so I hoped that we could just shift the whole sprint to GPN.

Unfortunately Guus couldn't attend, but Daniel and Maxime Buquet (pep) did and I spent most of the event hanging out with them and working on XMPP-related stuff. The developers behind the Dino XMPP client also attended and hung out with us for a while. There was also someone working on writing an XMPP connector for Empathy in C++.

XMPP hackers at Gulaschprogrammiernacht

Maxime also worked on adding OMEMO support to Poezio, and Daniel provided us with know-how and moral support. Daniel worked mainly on the Conversations Push Proxy.

We had some discussions around the value of holding regular sprints and I told them about my experience with sprints in the Plone community.

The Plone community regularly organizes sprints, and they've been invaluable in getting difficult work done that no single company could or would sponsor internally. To me it's a beautiful example of what's been termed Commons-based peer production.

The non-profit Plone foundation provides funding and an official seal of approval to these sprints, and each sprint generally has a particular focus (such as adding Python3 support). Sprints can range from 3 people to 30 or more.

One difference between the Plone and XMPP communities, is that Plone is a single open source product on which multiple companies and developers build their businesses, whereas XMPP is a protocol upon which multiple companies and developers create multiple products, some open source and some closed source.

Another difference is between the Plone Foundation and the XMPP Standards Foundation. The XSF, for better or worse, interprets its role and function fairly strictly as being a standards organisation primarily focused on standardising extensions to XMPP, and less on community building or supporting software development.

Despite these differences, I still think sprints offer a great way to foster community and to improve the extent and quality of XMPP software.

There is an interesting dynamic between cooperation and competition in both the Plone and XMPP communities. Participants compete with one another but they also have the shared goal of maintaining a healthy software ecosystem and they have common enemies (or competitors) in the form of competing products/protocols (either FOSS or proprietary).

Maxime was particularly excited by our discussion and very quickly put word into action by planning and announcing an XMPP sprint in Cambridge, UK in August.

There's still time to vote on the date of the sprint and to suggest topics.

Hopefully this will be the first of many more sprints and community events.

Unmanned laptops at the Gulaschprogrammiernacht

Paul Schaub: Summer of Code: Polishing the API

The third week of coding is nearing its end and I’m quite happy with how my project turned out so far.

The last two days I was ill, so I didn’t get anything done during that period, but since I started my work ahead of time during the bonding period, I think I can compensate for that :) .

Anyway, this week I created a second Manager class as another entry point to the API. This one is specifically targeted at the instant messaging use case of XEP-0374. It provides methods to easily start encrypted chats with contacts and register listeners for incoming chat messages.

I’m still not 100% pleased with how I’m handling exceptions. PGPainless so far only throws a single type of exception, which can make it hard to determine what exactly went wrong. This is something I have to change in the future.
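As an illustration of what a finer-grained hierarchy could look like, here is a small sketch. None of these class names exist in PGPainless; they are hypothetical, and only show how callers could react to specific failure modes while still being able to catch everything at once:

```java
// Hypothetical sketch of a finer-grained exception hierarchy.
// These classes are NOT part of PGPainless; the names are illustrative only.
public class PgpExceptions {

    // Common base class, so callers can still catch everything in one clause.
    public static class PgpException extends Exception {
        public PgpException(String message) { super(message); }
    }

    // Raised when no usable secret key is available for decryption.
    public static class MissingDecryptionKeyException extends PgpException {
        public MissingDecryptionKeyException(String message) { super(message); }
    }

    // Raised when a signature does not verify.
    public static class SignatureVerificationException extends PgpException {
        public SignatureVerificationException(String message) { super(message); }
    }

    // Example caller: distinguishes the failure modes instead of
    // seeing a single opaque exception type.
    public static String describeFailure(PgpException e) {
        if (e instanceof MissingDecryptionKeyException) {
            return "missing-key";
        } else if (e instanceof SignatureVerificationException) {
            return "bad-signature";
        }
        return "unknown";
    }
}
```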

Another thing that bothers me about PGPainless is the fact that I have to know how an OpenPGP message is constructed in order to process it. I have to know that a message is encrypted and signed in order to decrypt and verify it.
XEP-0373 does not specify any kind of marker that says “the following message is encrypted and signed”, so I have to modify PGPainless to provide a method that can process arbitrary OpenPGP messages and tells me afterwards whether the message was signed and so on.
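One way to express such a "process first, inspect afterwards" API is a small result object that the processing method fills in. Again, this is a hypothetical sketch and not the actual PGPainless interface:

```java
// Hypothetical sketch: the result of processing an arbitrary OpenPGP
// message, reporting afterwards what the message turned out to be.
// This is NOT the real PGPainless API.
public class ProcessingResult {

    private final byte[] plaintext;
    private final boolean wasEncrypted;
    private final boolean wasSigned;

    public ProcessingResult(byte[] plaintext, boolean wasEncrypted, boolean wasSigned) {
        this.plaintext = plaintext;
        this.wasEncrypted = wasEncrypted;
        this.wasSigned = wasSigned;
    }

    // The decrypted (or pass-through) payload.
    public byte[] getPlaintext() { return plaintext; }

    // True if the message had to be decrypted.
    public boolean wasEncrypted() { return wasEncrypted; }

    // True if the message carried a signature that was verified.
    public boolean wasSigned() { return wasSigned; }
}
```

A caller would then hand any incoming OpenPGP blob to a single processing method and branch on the returned flags, instead of having to know the message layout up front.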

Compared to last year’s project I’ve spent way more time documenting my code. Nearly every public method has a beautiful green block of Javadoc above its signature documenting what it does and how it should be used.
What I could do better, though, are tests. Last year my focus was on creating good JUnit and integration tests, while this time I only have the bare minimum. I’ll try to go through my API together with Florian next week to find rough edges and afterwards create some more tests.

Happy Hacking!

Monal IM: iOS 3.0.1 Released, How is Push?

The patch release is out.  Search is restored and stability should be better.

While I’m at it: how has push been working in the latest clients? I have seen thousands of devices registered, but I haven’t gotten a ton of feedback on how it has worked.

If you don’t have it working yet, you need an XEP-0357 module on your server.

Prosody: mod_cloud_notify

Ejabberd: mod_push

Openfire: unknown. Does anyone know if or how it supports XEP-0357?
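For ejabberd, for example, enabling the push module is a small addition to the server configuration. The snippet below is illustrative only; option names and defaults can differ between ejabberd versions, so check the documentation for the version you run:

```yaml
# ejabberd.yml — enable the XEP-0357 (Push Notifications) module.
# Illustrative snippet; verify against your ejabberd version's docs.
modules:
  mod_push: {}
```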

Monal IM: Mac 2.1.1 out

The first patch update for 2.1 has been released to the App Store. iOS is awaiting approval and should be available in a few hours.

May 28 2018

Monal IM: iOS Search Works Again

I appear to have forgotten to add search to the UI during the refactor. It has been re-added and has full iOS 11 and iPhone X support.

Jérôme Poisson: File sharing landing in next release of Salut à Toi

The last big feature before the preparation of the alpha release: file sharing is now available in Salut à Toi.

SàT has been able to send and receive files for years, either directly when two people are connected at the same time, or via an HTTP upload to the server. It is now possible to share a file hierarchy, or in other words one or several directories. There are two main use cases: using a component, or a client.

sharing a directory with Cagou

Sharing directory with client

The other way to use file sharing is from device to device. It can be used, for instance, to share pictures taken from your phone with your desktop computer, or to quickly give access to LibreOffice documents to your coworkers. To handle permissions, you just have to give the JIDs (XMPP identifiers) of allowed people.

The transfer is using Jingle technology, which will choose the best way to send the file. That means that if you are on the same local network (e.g. the previous case of sharing your phone picture with desktop computer, when you're at home), the connection will stay local, and the server will only see the signal (the data needed to establish the connection).

But if your devices are not on the same local area network, a connection is still possible, and it will be made directly whenever that is feasible.

file sharing with a client

As you can see, it's pretty similar to the workflow with a component.

Above you can see how easy it is to share a directory with Cagou, the desktop/Android frontend of Salut à Toi.

File sharing component

SàT can now act as a component (which is more or less a generic server plugin), and a first one allows a user to upload, list and retrieve files.

This is really handy when you want to keep some files private for later use (and access them from any device), or to share a photo album with, for instance, your family.

This is on the way to a service similar to "cloud storage", except that you keep control of your data.

file sharing with a component

With the invitation system now available in SàT, you can even share with people without account.

Some notes

File transfer is currently unencrypted, but encryption is planned soon, either with OX (OpenPGP) or OMEMO.
The base feature is there and working, but some improvements are planned for the more or less short term: quotas, file synchronization, e2e encryption, advanced search.


You'll find instructions on how to use this feature on the wiki.

Of course you'll need to use the development version; don't hesitate to ask for help in the SàT room (or via browser).

A package is now available for Cagou on AUR for Arch Linux, thanks to jnanar.

Help needed!

SàT is a huge project with strong ethical roots. It's unique in many ways and needs a lot of work. You can help it succeed either by supporting us on Liberapay or by contributing (check the official website or join our room for details).

The next post will be about the alpha release. Stay connected ;)

May 27 2018

Monal IM: iOS Crashes

As with any big update, there are going to be bugs. I know 3.0 is not as stable as the last release. I am working quickly to fix every crash I see. So far the following have been fixed and will be shipped in an update next week. It is a long weekend in the US, so these will likely come in by Wednesday. Sorry for the problems; know that I am fixing them ASAP. Things fixed so far:

  1. Crash when trying to save an account with no server has been replaced with an error message.
  2. Crash on iPads when retrying messages
  3. Crash on iPads when deleting account
  4. Crash on fetching message history
  5. Crash sometimes when receiving messages
  6. Crash sometimes when logging in

May 26 2018

Monal IM: Updates and GDPR

I am looking at the stats coming back, and I see one particular crash that I would like to resolve ASAP in both the iOS and Mac clients. There will be an update next week; after that, unless something pressing comes up, development will pause as I sort through GDPR. You may already have noticed the cookie banner that appears on this page, courtesy of the wp-stats package I am using. The reason for XMPP work stopping: GDPR is more work, and one person only has so much free time in a day.

General GDPR roadmap:

  1. Site (done)
  2. Crashlytics
  3. Mac
  4. Push server
  5. iOS