This is the continuation and the last piece of my notes from PGConf.eu in Vienna this year. Notes from the first day are available here and notes from the second day here.
I started the third day with KaiGai Kohei, who talked about pg_strom in his presentation GPGPU Accelerates PostgreSQL - Unlock the power of multi-thousand cores. The pg_strom plugin combines PostgreSQL with GPGPU; the project is in an early stage and aims at on-line native GPU code generation (using the NVRTC run-time compiler, with results cached so compilation happens only once). It builds on the custom scan/join interface new in 9.5, which allows overriding parts of the execution plan. Benchmarks showed roughly a 5x speedup for queries over 100M rows; joins benefited more than aggregates. Another benchmark showed some anomalies where pg_strom was slower for certain queries.
https://github.com/pg-strom/devel
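Trying it out boils down to loading the module and comparing plans. A minimal sketch follows; the pg_strom.enabled setting is an assumption taken from the project's documentation and the tables t1/t2 are made up, so check the repository above for the real knob names:

```
# postgresql.conf: load the module at server start (restart required)
#   shared_preload_libraries = 'pg_strom'

# compare the plan with the GPU path switched on; the GUC name is an
# assumption, the tables are made up for illustration
psql -c "SET pg_strom.enabled = on;
         EXPLAIN ANALYZE
         SELECT t1.cat, count(*)
         FROM t1 JOIN t2 USING (id)
         GROUP BY t1.cat;"
```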
Ruben Gaspar Aparicio from CERN shared some experiences from his DBaaS work. Besides the DB-related information, some CERN statistics were simply unbelievable -- 6000+ magnets in the Large Hadron Collider, which makes it the largest machine in the world. It produces 0.5PB of data per second during proton collisions, with about 100PB stored in total, 250K CPU cores and 2M jobs per day. 8000 physicists require real-time access to all that LHC data from 160 computer centres in 35 countries.
Besides 36 PostgreSQL instances (9.2 and 9.4), they run 226 MySQL and 8 Oracle instances in their DB on Demand (DBoD) service, plus roughly 100 Oracle databases on RAC. The system supports SSO and gives an overview of the instances; different DB providers offer different things, and self-service databases on demand are how the physicists work. The DBaaS supports upgrades, monitoring, backups and HA. However, there is no DBA support, no application support and no vendor support -- everybody gets their own database and must solve issues on their own. The system handles backups, configuration changes and PITR (point-in-time recovery). Query statistics can be viewed intuitively by users who are not DB experts, and a log overview helps users fix problems themselves.
I was smiling about Ruben's quote that "Deleting is manual, because there are people who don't realize that when they delete something, it is no longer there."
New machines are installed with Puppet and run on physical servers plus OpenStack as the infrastructure. They already plan to use containers for running the databases at CERN. They use Coverity, the apps are usually written in Perl 5.20, and Travis CI is used for running the unit tests.
Correct mount options are very important for proper PostgreSQL performance (a wrong setup showed up as slave lagging).
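As an illustration only (the device and mount point are made up, not their setup), a typical fstab entry for a PostgreSQL data volume at least drops atime updates:

```
# /etc/fstab -- hypothetical XFS data volume holding PGDATA
# noatime saves a metadata write for every read; the rest stays at defaults
/dev/mapper/vg0-pgdata  /var/lib/pgsql  xfs  defaults,noatime  0 0
```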
Tools they use: pg_upgrade, pg_snashot, pgbadger, pgreplay.
Future vision with containers (LXC): limiting CPU and memory, CLI access to the instances.
https://github.com/cerndb/
Locked Up: Advances in Postgres Data Encryption by Vibhor Kumar summarized the options for encryption in PostgreSQL -- we can either encrypt in the application, use the pgcrypto module, or use PGP encryption.
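For reference, a minimal pgcrypto round trip looks like this (the value and the passphrase are of course made up):

```
psql <<'SQL'
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- symmetric PGP encryption and decryption of a single value;
-- 's3cret' stands in for a passphrase the application would supply
SELECT pgp_sym_decrypt(pgp_sym_encrypt('credit card number', 's3cret'), 's3cret');
SQL
```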
Gianni Ciolli talked about repmgr in a talk called Automate High Availability using repmgr 3. The tool supports cascading replication, works as clusterware, and can be combined with Barman, pgBouncer and pg_rewind.
https://github.com/2ndQuadrant/repmgr/blob/master/CONTRIBUTING.md
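The basic repmgr workflow is roughly the following (host names and the config path are placeholders; double-check the options against the repmgr 3 documentation):

```
# on the node that should become a standby: clone the master and register it
repmgr -f /etc/repmgr.conf -h master.example.com -U repmgr -d repmgr standby clone
repmgr -f /etc/repmgr.conf standby register

# overview of the whole cluster from any node
repmgr -f /etc/repmgr.conf cluster show
```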
The talk The Elephants In The Room: Limitations of the PostgreSQL Core Technology by Robert Haas touched on dirty kernel page buffers when writing to disk -- an area where PostgreSQL itself can do almost nothing, which is why direct I/O keeps coming up. A full-featured logical replication solution was mentioned as something we really need in core (not only the underlying features).
Databases keep getting bigger and bigger -- horizontal scalability seems to be the answer, i.e. the focus is on better sharding.
Other areas to consider: better parallel queries, changes to the storage format, direct I/O, a built-in connection pooler.
Simon Riggs closed the event with a keynote in which he mentioned that pgconf.eu 2015 was the biggest such event worldwide ever.
Zero-downtime upgrades are something PostgreSQL upstream wants to achieve for going from 9.4 to 9.5 -- using cross-version replication. BDR (bi-directional replication) is heading into Postgres and may allow multi-master replication around 9.7.
Simon mentioned Stonebraker's vision that there would be either data-warehouse databases or transactional databases, but nothing would do both. PostgreSQL is proof that Stonebraker was wrong, because PostgreSQL does both. It was also mentioned that patched-Postgres projects like Postgres-XL, which focus on scalability and which should be rebased on 9.5 in Q1 2016, may be several times faster than 9.6.
Dave Page and Magnus Hagander had the final word, part of which belonged to 2ndQuadrant, which is becoming a sustainable, customer-funded open-source development company providing Highly Available Support and RemoteDBA.
I already wrote that it was my first PGConf.eu, but since the three days were full of very interesting topics and the hallways were full of very interesting PostgreSQL fans, I'm sure it was not my last one.
Monday, November 23, 2015
Thoughts from PGConf.eu, PostgreSQL Europe Conference 2015, Day 2
This is a continuation of my notes from PGConf.eu in Vienna this year. Notes from the first day are available here and from the third day here.
I started the second day with WAL, Standbys and Postgres 9.5 by Michael Paquier, a morning talk about archiving and warm standbys and how PostgreSQL 9.5 can help there.
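For context, continuous archiving plus a warm standby boils down to a handful of settings like these (paths, host and user names are placeholders; parameter names are the 9.5-era ones):

```
# on the primary (postgresql.conf)
#   wal_level = hot_standby
#   archive_mode = on
#   archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# bootstrap the standby from the primary; -R writes a recovery.conf for us
pg_basebackup -h primary.example.com -U replication_user \
              -D /var/lib/pgsql/data -X stream -R
```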
Then there was my talk called Database containers in enterprise world. I spoke about our experiences with preparing containers and about how Red Hat approaches containers (not as VMs, but rather as applications); all that was described with less technical detail but more context. Not many people attended (Robert Haas talking about parallelism in PostgreSQL in the next room was simply too big a competitor, just bad luck with the timing), but those who were there were already somewhat familiar with containers, which turned into a very nice discussion and many questions at the end. The questions were mostly about the differences between Nulecule and Kubernetes templates or Docker Compose, plus further questions about Nulecule, which I also roughly presented.
VACUUM, Freezing & Avoiding Wraparound was a talk by Simon Riggs about high concurrency, row visibility and deleted rows that create bloat. He explained the principle of vacuum, how long-running queries influence it, how it actually works and why it is so important.
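A handy query in that context shows how far each database is from the freeze limit (autovacuum_freeze_max_age defaults to 200 million transactions):

```
psql -c "SELECT datname,
                age(datfrozenxid) AS xid_age,
                current_setting('autovacuum_freeze_max_age') AS freeze_max_age
         FROM pg_database
         ORDER BY xid_age DESC;"
```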
The duo Gabriele Bartolini and Marco Nenciarini talked about logging huge amounts of PostgreSQL data and analysing it, which deserves proper tooling in the first place. The guys from 2ndQuadrant shared their experiences with Elasticsearch + Logstash + Kibana (the ELK stack) in a talk called Integrating PostgreSQL with Logstash for real-time monitoring.
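Whatever the shipper, PostgreSQL first has to produce parseable logs; the usual knobs look like this (the threshold and prefix are just an example, not what the speakers recommended):

```
psql <<'SQL'
-- log statements slower than 250 ms, with a prefix that log parsers
-- (pgbadger, Logstash filters, ...) can split into fields
ALTER SYSTEM SET log_min_duration_statement = '250ms';
ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';
SELECT pg_reload_conf();
SQL
```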
Linux tuning to improve PostgreSQL performance by Ilya Kosmodemiansky was a great overview of everything in the kernel we need to take into account when tuning PostgreSQL performance. Most of it was about setting proper sizes of memory pages and buffers.
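To give a flavour of the kind of settings discussed (the numbers below are illustrative, not recommendations from the talk; tune them for your hardware):

```
# flush dirty pages early and in small batches instead of huge bursts
sysctl -w vm.dirty_background_bytes=67108864
sysctl -w vm.dirty_bytes=536870912

# a dedicated database server should not swap eagerly
sysctl -w vm.swappiness=1
```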
Tomas Vondra talked once again at PGConf.eu, this time about PostgreSQL Performance on EXT4, XFS, F2FS, BTRFS and ZFS, after he had run several benchmarks of different filesystems. In short, ZFS was quite poor with the default settings, but it can be improved by fixing the page/record size. Otherwise the results were quite comparable (XFS and EXT4 seemed the best, though). BTRFS looked quite bad in the read-write benchmark, where XFS and EXT4 were again the best.
http://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs-54525451
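The ZFS fix mentioned above is basically about aligning the dataset record size with PostgreSQL's 8kB pages, roughly like this (the dataset name is made up):

```
# align ZFS records with PostgreSQL's 8kB pages and skip atime updates
zfs set recordsize=8k tank/pgdata
zfs set atime=off tank/pgdata
```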
The second day was closed by 90 minutes of lightning talks, which have already become a tradition at PostgreSQL conferences, and the strict 5-minute rule makes them really interesting.
Sunday, November 22, 2015
Thoughts from PGConf.eu, PostgreSQL Europe Conference 2015, Day 1
It was my first PGConf.eu and it was awesome. My bad that I waited so long to share some of the thoughts, but I'm fixing that now with a summary. Links to the presentations are located at https://wiki.postgresql.org/wiki/PostgreSQL_Conference_Europe_Talks_2015.
The keynote by Tamara Atanasoska, called The bigger picture: More than just code, made the point that a community project is mostly about people; code comes second. It was also stressed how important it is to be open to newcomers and users. A great eye-opener for anybody involved in open source.
The Upcoming PostgreSQL 9.5 Features talk by Bruce Momjian covered these new features: INSERT ... ON CONFLICT (UPSERT), aggregate function enhancements, in-memory performance, JSON manipulation and operators, and improvements in foreign data wrappers, which can now take part in sharding. I really can't wait for PostgreSQL 9.5 GA.
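The UPSERT syntax new in 9.5 looks like this (the table is made up for illustration):

```
psql <<'SQL'
CREATE TABLE counters (name text PRIMARY KEY, hits bigint NOT NULL);

-- insert, or bump the counter if the row already exists (new in 9.5)
INSERT INTO counters (name, hits) VALUES ('homepage', 1)
ON CONFLICT (name) DO UPDATE SET hits = counters.hits + EXCLUDED.hits;
SQL
```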
Dockerizing a Larger PostgreSQL Installation: What could possibly go wrong? by Mladen Marinović was really something I was looking for, because containers are topic #1 for me right now. Mladen demonstrated using containers mostly as VMs; the basics of Docker were also introduced, since only half of the room was familiar with it. Problems with the cache during `docker build` and with locales inside containers were mentioned, but the solutions seemed rather hacky to me (using `date` to trick the daemon instead of the straightforward --no-cache). A problem with sending signals to the processes was also mentioned, which is something we were looking at as well while creating containers for OpenShift.
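For the record, the straightforward way to force a completely fresh build instead of tricking the cache is simply (the image name is made up):

```
# rebuild every layer from scratch instead of reusing the build cache
docker build --no-cache -t my-postgres-image .
```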
Mladen uses Ansible for building the images and runs several containers for specific actions -- e.g. a separate container for tools (dump/load). They support replication -- hot_standby, restore_command (run in a separate container) -- and run on two physical servers. Backups (pg_basebackup + compression) follow a retention policy and run in yet another container.
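A backup container of that kind typically just wraps a command along these lines (host, user and path are placeholders):

```
# compressed tar-format base backup taken over the replication connection
pg_basebackup -h master.example.com -U replicator \
              -D /backups/base-$(date +%Y%m%d) -Ft -z -P
```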
It was also mentioned that taking a backup from a slave is not that easy. Monitoring that all containers are alive is done by HA software. Problems with the OOM killer were also mentioned -- the system may kill your container processes -- and the solution was to limit max_connections. A problem with a freezing server was then solved by using a timeout for every command. Transparent huge pages were mentioned as something not to use -- use normal huge pages instead. Finally, upgrading to new major versions was mentioned as always being a tough point.
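The host-level side of the THP and OOM advice usually looks roughly like this (illustrative; paths and values may differ per distribution):

```
# switch off transparent huge pages in favour of explicit huge pages
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# stricter overcommit makes the OOM killer much less likely to fire
sysctl -w vm.overcommit_memory=2
```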
See more at: https://bitbucket.org/marin/pgconfeu2015/src
DBA's toolbelt by Kaarel Moppel was more of a list of tools to look at when you are serious about PostgreSQL administration, so just repeating the list may be interesting for someone (a small pg_stat_statements example follows right after it):
- docs + pgconfig for configuring
- pgbench + pgbench-tools for benchmarking
- pg_activity (top-like) + pg_view + pgstats + pg_stat_statements + pgstattuple + plugins for monitoring systems (Nagios) + postgresql-metrics (Spotify) for monitoring
- pgBadger + pg_loggrep for log analysis
- postgresql-toolkit -- a Swiss-army knife for the PostgreSQL DBA
- pg-utils, pgx_scripts (https://github.com/pgexperts/pgx_scripts), acid-tools
- pg_locks, pg_stat_activity (locks)
- pgObserver, pgCluu and many others, depending on what you need to do
- pgAdmin 4 on the way (backend and web frontend)
- for developers: pgloader, pg_partman, pgpool-II, PL/Proxy
https://github.com/gulcin/pgconfeu2015
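As promised above, a tiny taste of pg_stat_statements (it must be in shared_preload_libraries first; the column names below match the 9.x versions of the extension):

```
psql <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- the five normalized queries that consumed the most total execution time
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
SQL
```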
Let’s turn your PostgreSQL into columnar store with cstore_fdw by Jan Holčapek introduced an interesting plugin that can turn the classic row-based PostgreSQL storage into a column-store database by utilizing the foreign data wrapper concept. In some cases this may help performance a lot, once it is ready.
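Setting it up follows the usual FDW pattern; a sketch based on the cstore_fdw README (the table itself is made up):

```
psql <<'SQL'
-- cstore_fdw also has to be added to shared_preload_libraries
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

-- a compressed, column-oriented table behind the FDW interface
CREATE FOREIGN TABLE events (
    event_time timestamptz,
    user_id    bigint,
    payload    text
) SERVER cstore_server OPTIONS (compression 'pglz');
SQL
```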
Performance improvements in 9.5 and beyond by Tomas Vondra was not only an interesting insight into the particular areas PostgreSQL hackers are looking at, but also a nice motivation to consider upgrading to 9.5. For example, a sorting speed-up of up to 3-4x compared to 9.4 in some cases is something I wouldn't really have expected. Another comparison, BRIN vs. B-tree indexes, showed that the performance is quite similar, but BRIN is much, much smaller. Another set of graphs showed how parallel scans can cut select times down to as little as half.
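The BRIN vs. B-tree size difference is easy to reproduce on any naturally ordered column; a synthetic example:

```
psql <<'SQL'
CREATE TABLE measurements AS
SELECT g AS id, now() + g * interval '1 second' AS ts
FROM generate_series(1, 1000000) g;

CREATE INDEX measurements_ts_btree ON measurements USING btree (ts);
CREATE INDEX measurements_ts_brin  ON measurements USING brin  (ts);

-- the BRIN index should come out orders of magnitude smaller
SELECT relname, pg_size_pretty(pg_relation_size(oid::regclass))
FROM pg_class
WHERE relname IN ('measurements_ts_btree', 'measurements_ts_brin');
SQL
```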
Notes from the second day are here and from the third day here.
Saturday, November 21, 2015
How we've fixed upgrade path for CentOS 6 Software Collections
Short message for those who don't have time:
Software Collections for CentOS 6 are ready to upgrade from the older rebuilds.
Now the full story for those who care. By the way, this is all related to the work done by the SCLo SIG group, which is part of CentOS (read more at http://wiki.centos.org/SpecialInterestGroup/SCLo).
A bit of history to begin with. Shortly after the first RHSCL 1.0 release, CentOS rebuilds were prepared, and since then they have been available under:
http://mirror.centos.org/centos/6/SCL/x86_64/
However, keeping these rebuilds in sync with the RHSCL content hasn't been an easy task. With the introduction of Java packages into the collections, it became even more tricky, which means the collections were not updated for a long time. Still, one would expect there to be no problem with the upgrade path -- in other words, that the new RPMs the SCLo SIG group is about to release will update the older RPMs smoothly.
Well, not always. The original RPMs used ".el6.centos.alt" as the %dist tag, while the new builds use just ".el6", which results in cases where python27-python-bson-2.5.2-4.el6.centos.alt.x86_64 > python27-python-bson-2.5.2-4.el6.x86_64, even though both packages have the same Release tag in the RPM SPEC. That obviously means the packages won't update smoothly.
The solution is quite simple in this case -- use a higher Release in the RPM SPEC. In some packages this was already done, because some of them have received updates since their original inclusion. In the other cases we solve it by adding a ".scX" suffix (X being a number) to the Release tag. The ".scX" was chosen deliberately, since ".scX.el6" sorts higher (alphabetically) than ".el6".
By the way, for cases where we need to build a package several times before the final build (bootstrapping), we use a ".bsX" suffix, which means we can build the final package without any Release suffix in the end, because ".bsX.el6" < ".el6".
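You can check these orderings yourself with rpmdev-vercmp from the rpmdevtools package (the version strings below are just the examples from this post):

```
# old dist tag vs. plain .el6 -- the old build wins, hence the broken upgrade path
rpmdev-vercmp 2.5.2-4.el6.centos.alt 2.5.2-4.el6

# the .scX suffix sorts above .el6, the .bsX bootstrap suffix below it
rpmdev-vercmp 2.5.2-4.sc1.el6 2.5.2-4.el6
rpmdev-vercmp 2.5.2-4.bs1.el6 2.5.2-4.el6
```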
Anyway, this post was meant to let you know that upgrading the el6 packages from the originally built RPMs is something we care about.
To verify it works, I installed all the packages from the original repository and then ran "yum update", which resulted in a proper update of all the packages. I take that as proof it should work fine in your case as well. If there are still some issues, let us know.
Enjoy Software Collections on CentOS!
Friday, February 13, 2015
Thoughts and notes from Prague PostgreSQL Developers Day 2015
The day started with news in PostgreSQL 9.4 presented by Tomáš Vondra, now working for 2ndQuadrant. He talked about improvements in replication, GIN indexes, numeric aggregates, refreshing materialized views, altering system variables directly from the daemon, and more.
Then Pavel Stěhule from GoodData gave us a somewhat deeper review of storing non-structured data within PostgreSQL (from the historic implementations to the great new JSONB in 9.4). It was quite interesting to see that even traditional relational DB users think seriously about NoSQL solutions like MongoDB. It seems even advocates of the traditional SQL solutions see some advantages in the new ways of storing data (NoSQL concepts).
It was also mentioned that NoSQL is understood not only as not-SQL but more often as not-only-SQL, and that while relational databases implement features to get closer to NoSQL, NoSQL databases implement features to get a bit closer to the traditional SQL solutions (better ACID guarantees, reliability, ...).
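A quick taste of the JSONB operators mentioned above (the table and data are made up):

```
psql <<'SQL'
CREATE TABLE docs (id serial PRIMARY KEY, body jsonb);
INSERT INTO docs (body)
VALUES ('{"user": {"name": "alice", "tags": ["pg", "nosql"]}}');

-- extract a nested value and filter with the containment operator;
-- a GIN index on body would make the @> lookup fast on large tables
SELECT body #>> '{user,name}' AS name
FROM docs
WHERE body @> '{"user": {"tags": ["pg"]}}';
SQL
```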
Vratislav Beneš from OptiSolutions gave a talk about unstructured and sensor data, and about big data challenges in general. He presented a comparison between PostgreSQL 9.4 (with JSONB) and MongoDB for the same kind of work with unstructured data. His testing showed no big differences in performance; PostgreSQL is, however, quite a bit better in space utilization (~3 times less data on disk). He closed his talk with the thought that everybody should choose a database carefully, according to the specific use case (e.g. whether we care about other NoSQL features like out-of-the-box sharding/replication, or whether we're fine with a somewhat more heavy-footed but more reliable SQL solution).
Aliaksandr Aliashkevich gave a fairly basic overview of sharding solutions for PostgreSQL; just read his shards.io site for more information, the presentation had basically the same content.
A similar overview, this time of other open-source tools (dumping, upgrading, partitioning, logical replication), was given by Keith Fiske, the actual author of the tools. They are all available under his username on GitHub, so just look there or check his site keithf4.com for more information.
Marc Balmer from micro systems spoke about securing PostgreSQL applications, where he emphasized the need to limit access not only at the application level but also at the database level. He didn't omit a basic overview of the main database vulnerabilities with some specific examples (like SQL injection), but most of the presentation was about ways to protect the data within a database from being abused by users who shouldn't have access to them. I think nobody who was there and pays any attention to security will ever use a single superuser role to access the DB again. Hopefully the slides will soon be on the p2d2 site.
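The core of that advice condenses into a few lines of SQL (role, password and table names are made up):

```
psql <<'SQL'
-- an unprivileged application role instead of connecting as a superuser
CREATE ROLE app_user LOGIN PASSWORD 'change-me'
    NOSUPERUSER NOCREATEDB NOCREATEROLE;

-- grant only what the application actually needs on its own tables
GRANT SELECT, INSERT, UPDATE ON orders TO app_user;
GRANT SELECT ON products TO app_user;
SQL
```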
Petr Jelínek from 2ndQuadrant talked about a feature that is heading into PostgreSQL 9.next (not sure yet whether it makes 9.5) -- BDR, Bi-Directional Replication. It basically implements multi-master capabilities and is an already working solution, which is heading patch by patch into PostgreSQL upstream. We know upstream is careful about any new feature, so even in this case it goes rather slowly, because upstream needs to be sure it is good for everybody (not only for specific use cases).
Štěpán Bechynský showed Amazon's AWS in practice, especially the steps to get a PostgreSQL machine in the cloud (basically the PaaS variant). It must have been interesting for anybody who hadn't seen how cloud provisioning works from the user's point of view, but since I had already seen OpenStack in action, I didn't actually learn much new. I heard from other attendees as well that some specific experiences were missing -- performance experiences, the specific issues someone new to the cloud world runs into, etc.
Since PostgreSQL now supports foreign data wrappers, a couple of interesting wrappers are already available. They add new functionality to the daemon itself and remind me a bit of MySQL's storage engine architecture. Jan Holčapek introduced one of those wrappers -- cstore_fdw, which adds columnar storage capabilities to PostgreSQL.
It was interesting to see that even for some uncomplicated queries the EXPLAIN command showed several times fewer I/O operations compared to native PostgreSQL storage. For more complicated use cases it may be even better, since going through the columns directly for an aggregation is much more effective than reading whole rows where most of the data is unnecessary for the specific query.
Even though this particular foreign data wrapper doesn't support insert/update/delete, it seemed very promising. It was also interesting when Jan asked the audience which column databases they use: since MonetDB was mentioned more than once, it seems to be a good candidate to be packaged into Fedora. Who is volunteering?
Tuesday, February 03, 2015
Fosdem notes alias what's happening in open-source world
I had a great opportunity to be at FOSDEM, an enormous conference with several thousand attendees that took place last weekend in Brussels. The following are some notes from the talks I attended.
Python & PostgreSQL, a Wonderful Wedding
This was more or less a summary of the various ways Python and PostgreSQL can interact. A couple of Python libraries were introduced (psycopg2, SQLAlchemy, Alembic), then some Python-related PostgreSQL extensions (like PL/Python and Multicorn). Especially Multicorn gives Python developers quite a wide range of options on top of PostgreSQL: it basically allows writing a foreign data wrapper in Python, so whatever data you can get with Python (be it a text file, an embedded DB, an XML document or another database), you can then access from within the database using SQL.
Schedule entry here.
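On the SQL side, using a Multicorn-based wrapper looks roughly like this (the CSV wrapper class path follows the Multicorn documentation; the file and columns are made up):

```
psql <<'SQL'
CREATE EXTENSION multicorn;

-- the wrapper option points at the Python class implementing the FDW
CREATE SERVER csv_srv FOREIGN DATA WRAPPER multicorn
    OPTIONS (wrapper 'multicorn.csvfdw.CsvFdw');

CREATE FOREIGN TABLE visitors (name text, visits integer)
    SERVER csv_srv
    OPTIONS (filename '/tmp/visitors.csv', delimiter ',');

SELECT * FROM visitors WHERE visits > 10;
SQL
```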
Nix, NixOS, NixOps
This seemed to be a somewhat chaotic demo of the Nix* tools and OS. I expected to get some ideas about solving the packaging-of-multiple-versions issue, but the talk was more focused on Vagrant-like provisioning of virtual environments. It also seemed a bit curious that the speaker didn't know about the existence of environment modules, so obviously he was not able to tell what the difference is.
But hearing from some people in the audience that environment modules solve many issues for them was quite interesting for me, especially in relation to Software Collections.
Schedule entry here.
Upstream Downstream: The relationship between developer and package maintainer
A talk given by an Oracle guy who is, by the way, responsible for maintaining relationships with the mysql package maintainers in the various distributions. He tried to show that Oracle is not that evil and that they want to fix bugs if they know about them (which is not happening in some cases). It was a somewhat less technical talk, where some interesting numbers were also mentioned -- the developer count on the MySQL project has been doubled and the QE count tripled recently.
Even after a straight personal question about what happens with MySQL in the future, I got a fairly straight answer that I shouldn't be afraid, it won't get any more closed.
Together with the MySQL/MariaDB Debian maintainer Otto and MariaDB architect Colin, we also talked about the need to bring the packages in the various distros closer together, so that we unify the users' experience.
Schedule entry here.
Modern SQL in PostgreSQL
This was a half technical and half motivational talk that was supposed to convince developers to use the latest features of SQL databases and of the SQL standard. On a few examples we saw how a single keyword unknown to the majority of DB developers can help readability or even performance. It was interesting to see when some of the latest SQL standard features were implemented in the various databases. Usually IBM's DB2 or the Oracle database supported a feature even before, or shortly after, its introduction in the SQL standard, while PostgreSQL and MS SQL Server usually implemented it a few years later.
And MySQL did not support most of the mentioned features at all, which confirmed that MySQL is better suited for simple use cases and less advanced (traditional) usage. On the other hand, PostgreSQL and MS SQL Server follow Oracle DB's and DB2's development, being a few years behind.
Schedule entry here.
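One example of the kind of "single keyword" discussed is the FILTER clause for aggregates (SQL:2003, in PostgreSQL since 9.4), which replaces the usual CASE trick; the table is made up:

```
psql <<'SQL'
-- pivot-style counts without wrapping every aggregate in CASE WHEN ... END
SELECT shop_id,
       count(*)                                  AS all_orders,
       count(*) FILTER (WHERE status = 'open')   AS open_orders,
       count(*) FILTER (WHERE status = 'closed') AS closed_orders
FROM orders
GROUP BY shop_id;
SQL
```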
ProxySQL : High Availability and High Performance Proxy for MySQL
ProxySQL is a promising tool that may help with setting up even complicated deployments with various replication schemes, since the proxy server communicates either with a MySQL server or with another ProxySQL daemon using the pure MySQL protocol. Thanks to that it can behave differently for different queries, based on defined regular expressions. It was almost the same talk as last year, except some bugs have probably been fixed since. Anyway, the tool seems worth packaging for Fedora and playing with at some point.
Schedule entry here.
User-land and developer-land chat
An interesting talk about communication within an open source project -- designers vs. developers vs. users. It was nice to see the other side of the relationship, for example that users often don't understand why bug reports get closed and why some changes in design happen, which often leads to their frustration. Some simple things can help, like briefly indicating, when closing a bug, that the developers still want to collaborate, they just don't have enough data. It may be that simple to make users not feel ignored.
Another example, Gnome 3 with its initially problematic and later quite broad adoption, showed that even if developers make rapid changes in the GUI, users may end up feeling good about it, but only after the needed plugins have been introduced that bring some of their favourite features back.
Another point was that even if an upstream is open for collaboration, people still need some boundaries, so setting up guidelines similar to the Tango Icon Theme Guidelines helps produce consistent output, and the collaboration then works much better.
The last thought was dedicated to the need for non-technical people, since developers are often reluctant to write the necessary documentation, maintain the social network presence, etc.
These are all topics everyone involved in OSS may want to think about.
Schedule entry here.