Webinar November 10: Google Cloud Platform – MySQL at Scale
with Reliable HA

Google Cloud Platform
(GCP), with its CloudSQL offering, has become a leading platform
for database-as-a-service workload deployments for many
organizations. Scale and High Availability have surfaced as primary
goals for many of these deployments. Unfortunately, the attainment
of these objectives has been challenging.

Often, the answer has been to simply add more CloudSQL
databases. Many, however, have found a better solution in
Percona’s fully managed MySQL environment based in Google’s
GCE. Percona’s fully managed MySQL offering provides benefits
similar to CloudSQL, plus the ability to run MySQL with an
unlimited number of tables and much more reliable database
availability. This has empowered these organizations to reclaim
control over their architecture decisions. With Percona’s fully
managed MySQL, your database architecture choices are once again
based on your needs and the needs of your workload, rather than the
capabilities of the underlying platform. Learn more about GCP and
the benefits of Percona’s fully-managed database service to scale
and consolidate your databases.

Please join Stephen Thorn and Michal Nosek, Percona Solution
Engineers, on Tuesday, November 10th, at 1 pm EDT
for their webinar “Google Cloud Platform: MySQL at Scale
with Reliable HA”.


Register for Webinar

If you can’t attend,
sign up anyway
and we’ll send you the slides and recording
afterward.

Annotations Provide Context to Timelines in Percona
Monitoring and Management


About the Author – This blog was written in
collaboration with my colleague Jiri Ctvrtka. Jiri is a senior
software developer from Brno, Czech Republic, and has been
partnering with Percona for almost a year, working on various
components of our Percona Platform. He has been programming in Go
since 2015, has a passion for simplicity, speed, and precision of
data, and has focused that passion on understanding the impact of
changes and reimagining the Percona Monitoring and Management
(PMM) Annotations functionality.

“Does anyone remember what caused this large spike
last Thursday at 2:24 am?”

What are Annotations?

Annotations are a way to provide context to timelines in
Percona Monitoring and Management (PMM). In bigger teams, for
example, they are a good way to inform others about an event or
important change that may have occurred. An annotation can contain
any kind of information, but we see it most commonly used to
indicate an outage, the start or end of a maintenance window, a
deployment of new code, a security event, and so on. Annotations in
PMM help provide quick explanations for peaks and valleys in
graphs, indicate when something took place on the timeline, and
create correlations.

 

Annotations in Percona Monitoring and Management

An example of annotations automatically added at the beginning
and end of a benchmark test.

 

Every annotation can be shown or hidden with a simple toggle
button in the filter options, so you don’t need to worry about
annotations crowding your charts. You can toggle them on and off
and zoom in or out around events to get better detail.

The filter contains a toggle button to turn the visibility of
annotations on and off.

How Can I Add Annotations?

Annotations can be added with the pmm-admin annotate
command. So, let’s try it:

pmm-admin annotate "Deployed web version 2.4.6"

That command will place an annotation on every chart in every
node and service…maybe more than you need. What if you only
needed to add an annotation for a specific node or service to
indicate a broad network outage? Or add specific annotations for
specific nodes or services to indicate a software update was
applied? This is all possible as pmm-admin provides four flags,
which can be used for just this purpose.

--node = annotate the node the pmm-admin command was run on
--node-name = annotate node with specified name
--service = annotate all services running on the node the pmm-admin command was run on
--service-name = annotate service with specified name

All of these flags can be combined to annotate multiple nodes or
services with a single command, and the order of the flags
doesn’t matter. Just imagine how many combinations you have!
Even better, imagine how easily this can be integrated into your
CI/CD or deploy pipelines!
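As a sketch of that pipeline integration, here is a minimal deploy-step script, assuming pmm-admin is installed and registered on the host; the version number and message format are just examples:

```shell
#!/bin/sh
# Hypothetical deploy-pipeline step: annotate every service on this node
# when a release goes out. Version number and message format are examples.
VERSION="2.4.6"
MSG="$(date '+%Y-%m-%d %H:%M:%S') - Deployed web version ${VERSION}"
echo "annotating: ${MSG}"
# Only call pmm-admin if it is actually installed on this host.
if command -v pmm-admin >/dev/null 2>&1; then
  pmm-admin annotate "${MSG}" --service
fi
```

Dropping a step like this into a deploy job means every release shows up on the node's service graphs automatically.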

You can also add annotations via the API with curl commands:

curl 'http://admin:admin@localhost/graph/api/annotations' -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"tags": ARRAY OF TAGS,"text":"TEXT"}'

ARRAY OF TAGS = the names of the node or services; for
node pmm-server and service pmm-server-mysql it would be
"tags": ["pmm-server", "pmm-server-mysql"]

Some Examples for Better Understanding

Case 1: We have a DB node we need to take
offline for maintenance and want to capture this on all its graphs
to explain the gap in reporting data.

pmm-admin annotate "`date` - System is going down for Maintenance - RFC-2232-2020" --node-name="db1002"

via API:

curl 'http://admin:admin@localhost/graph/api/annotations' -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"tags": ["db1002"],"text":"`date` - System is going down for Maintenance - RFC-2232-2020"}'

Or, if pmm-admin is running on the node itself, you don’t even
need to know its name and can make the annotation part of the
shutdown routine.

pmm-admin annotate "`date` - System is being shut down/rebooted"

Case 2: We have node pmm-server and three
services running on the current node (mysql, postgres, mongo). So,
yeah, it’s simple right?

pmm-admin annotate "`date` - Apply configuration change - RFC-1009-2020" --service-name=mysql
pmm-admin annotate "Restarting Postgres to apply security patch" --service-name=postgres
pmm-admin annotate "`date` - Service Disruption Reported via Support - SUP-239" --service-name=mongo

via API:

curl 'http://admin:admin@localhost/graph/api/annotations' -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"tags": ["mysql", "postgres", "mongo"],"text":"`date` - Apply configuration change - RFC-1009-2020"}'

Or you can do it in one command:

pmm-admin annotate "Services Recycled to pickup config changes" --service

And that’s it!  All services found running on that node will
be annotated.

Case 3: We have node “registrationDb” and
many services running on the current node. What if we want to
annotate that node and also every service running on this node by
just one command? Again, no problem:

pmm-admin annotate "`date` - Security alerted to possible event" --node-name=registrationDb --service

via API:

curl 'http://admin:admin@localhost/graph/api/annotations' -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"tags": ["registrationDb", "service1", "service2",...*],"text":"`date` - Security alerted to possible event"}'

* While the pmm-admin --service flag annotates all services
automatically, with the API you need to supply all of the node’s
service names yourself to get the same result. You can, however,
make an API call to fetch all services on a given node.
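As a rough sketch of that assembly step, here is one way the tags array for such an API call could be built in shell; the service names below are placeholders standing in for what an inventory lookup would actually return:

```shell
#!/bin/sh
# Hypothetical helper: build the annotation payload for a node plus all of
# its services. SERVICES stands in for the result of an inventory lookup;
# these names are examples, not real output.
NODE="registrationDb"
SERVICES="mysql postgres mongo"
TAGS="\"$NODE\""
for s in $SERVICES; do
  TAGS="$TAGS, \"$s\""
done
# Emit the JSON body to pass to the annotations endpoint via --data-binary.
printf '{"tags": [%s], "text": "%s"}\n' "$TAGS" "Security alerted to possible event"
```

The printed JSON can then be fed straight into the curl command shown below it.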

That’s it: no matter how many services are running on node
registrationDb, the annotation will appear on all of them, and on
the node’s graphs as well.

Case 4: We have 100 services running on the
current node and also another node named pmm-server2 and we need to
annotate all 100 services on the current node (but not the current
node) along with node pmm-server2.  Simple:

pmm-admin annotate "`date` - Load Test Start" --node-name=pmm-server2 --service

via API:

curl 'http://admin:admin@localhost/graph/api/annotations' -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"tags": ["pmm-server2", "service1", "service2",...*],"text":"`date` - Load Test - Increasing to 100 threads"}'

* While the pmm-admin --service flag annotates all services
automatically, with the API you need to supply all of the node’s
service names yourself to get the same result. You can, however,
make an API call to fetch all services on a given node.

The current node will not be annotated, but every service on
this node will be along with node pmm-server2.

Here’s a little guide to see many of the possible combinations
of flags and what will result:

--node = current node
--node-name = node with name
--node --node-name = current node and node with name
--node --service-name = current node and service with name
--node --node-name --service-name = current node, node with name, and service with name
--node --service = current node and all services of current node
--node-name --service = all services of current node, node with name
--node --node-name --service = all services of current node, node with name, and current node
--service = all services of current node
--service-name = service with name
--service --service-name = all services of current node, service with name
--service --node-name = all services of current node and node with name
--service-name --node-name = service with name and node with name
--service --service-name --node-name = service with name, all services on current node, and node with name

So, thanks to annotations, correlating events on your servers is
now easier than ever. We’d love to hear, or even see, how
you’re using annotations to make your life easier; hopefully
we’ve given you some ideas to get started right away!


Download Percona Monitoring and Management Today

Automation and the Future of Modern Db2 Data
Management

Recently I was invited by BMC Software to participate in their
AMI Z Talk podcast
series
to talk about modern data management for Db2... and I
was happy to accept.

Anne Hoelscher, Director of R&D for BMC's Db2 solutions, and I
spent about 30 minutes discussing modern data management, the need
for intelligent automation, DevOps, the cloud, and how
organizations can achieve greater availability, resiliency, and
agility managing their mainframe Db2 environment.

Here's a link to the podcast that you can play right here in the
blog!

BMC AMI Z Talk ·
Episode 7: Automation and the Future of Modern Db2 Data
Management

Modern data management, to me, means flexibility, adaptability, and
working in an integrated way with a team. Today’s data
professionals have to move faster and more nimbly than ever before.
This has given rise to agile development and DevOps - and, as such,
modern DBAs participate in development teams. And DBA tasks and
procedures are integrated into the DevOps pipeline.

And as all of this DevOps adoption is happening, the amount of
data we store, and have to manage, continues to grow faster than
ever before.

These are just some of the challenges that Anne and I discuss
in this podcast... and at the end, Anne even asks me to predict the
future... 

I hope you'll take the time to listen to our discussion and
share your thoughts and issues regarding the resiliency and
agility required to succeed with modern data management and Db2 for
z/OS.

The post Automation and the Future of Modern Db2 Data Management
appeared first on Planet DB2.

MySQL JSON_TABLE – Map a JSON object to a relational
database table

Introduction In this article, I’m going to explain how the
MySQL JSON_TABLE function works, and how you can use it to
transform a JSON object into a relational database table. When
using a relational database system, it’s best to design the
database schema according to the relational model. However, there
are certain use cases when the relational model is too strict, and
we are better off storing data in a JSON column type. For instance,
as I explained in this article, when designing an audit log table,
it’s much more convenient to... Read More

The post MySQL JSON_TABLE –
Map a JSON object to a relational database table
appeared first
on Vlad Mihalcea.

Galera Cluster for MySQL 5.6.49, 5.7.31, and 8.0.21
released

Codership is pleased to announce a new Generally Available (GA)
release of the multi-master Galera Cluster for MySQL 5.6, 5.7 and
8.0, consisting of MySQL-wsrep 5.6.49 (release notes, download),
5.7.31 (release notes, download), and 8.0.21 (release notes,
download) with Galera Replication library 3.31 (release notes,
download) implementing wsrep API version 25 for 5.6 and 5.7, and
Galera Replication library 4.6 (release notes, download)
implementing wsrep API version 26 for 8.0. This release
incorporates all changes in MySQL 5.6.49, 5.7.31, and 8.0.21
respectively, adding a synchronous option for your MySQL High
Availability solutions.

It is recommended that you upgrade your Galera Cluster for MySQL
5.6, 5.7 and 8.0, because this release includes a fix for the
security vulnerability CVE-2020-15180.
The binary tarball is also compiled with OpenSSL 1.1.1g.

A highlight of this release is that with MySQL 8.0.21 you will
now have access to the Percona audit log plugin, which helps with
monitoring and logging the connection and query activity performed
on specific servers. This implementation is provided as an
alternative to the MySQL Enterprise Audit Log Plugin.

In addition to fixing deadlocks that may occur between DDL and
applying transactions, in 8.0.21 the write-set replication patch is
now optimised to work with the Contention-Aware Transaction
Scheduling (CATS) algorithm that is present in InnoDB. You can read
more about
transaction scheduling
in the MySQL manual.

For those that requested the missing binary tarball package, the
MySQL 8.0.21 build includes just that. Packages continue to be
available for: CentOS 7 & 8, Red Hat Enterprise Linux 7 &
8, Debian 10, SLES 15 SP1, as well as Ubuntu 18.04 LTS and Ubuntu
20.04 LTS. The latest versions are also available in the FreeBSD
Ports Collection.

The Galera Replication library has had some notable fixes, one
of which improves memory usage tremendously. The in-memory GCache
index implementation now uses a sorted std::deque instead of a
std::map, which leads to an eightfold reduction in memory
footprint. Hardware CRC32 is now supported on x86_64 and ARM64
platforms.

There are also three new status variables:
wsrep_flow_control_active (tells you whether flow control is
currently active, i.e. replication is paused, in the cluster),
wsrep_flow_control_requested (tells you whether the node has
requested a replication pause because the received-events queue is
too long), and wsrep_gmcast_segment (tells you which cluster
segment the node belongs to).

For Galera Replication library 3.31, this is the last release
for Debian Jessie and openSUSE 15.0. For Galera Replication library
4.6, this is the last release for openSUSE 15.0. For MySQL-wsrep
5.6 and 5.7, this is also the last release for Debian Jessie. For
MySQL-wsrep 5.7 and MySQL-wsrep 8.0, this is the last release for
openSUSE 15.0.

IBM Cloud: Considerations for role and resource
setup

Here is a quick follow-up on my recent blog on
best practices for the cloud onboarding of enterprise projects.
There, I discussed how to use Terraform scripts as a blueprint in
the onboarding process. Starting with a corporate standard for
setting up roles and other security objects, project-specific
layers are added later on. The IBM Cloud solution tutorial on
applying end to
The post
IBM Cloud: Considerations for role and resource setup
appeared
first on Planet DB2.

MySQL Shell get_auto_increment_value() method – Python
mode

These days, I mostly program in PHP with MySQL as the database,
which is just fine by me. I have had a long-time interest in MySQL
(SQL in general), and after several steady months of programming in
PHP, I must say I have really come into the language and developed
a fondness for it. All that being said, I still enjoy using and
learning the MySQL Shell in Python mode. As Database Developers, we
often need the LAST INSERT ID value from a previous INSERT
statement on a column that has the AUTO_INCREMENT attribute. MySQL
Shell has a get_auto_increment_value() method we can call against a
Shell object result and retrieve that value. Continue reading and
see examples of the MySQL Shell get_auto_increment_value() method
used in Python mode…


large number 2 on green background
Photo by
Pierpaolo Riondato
on
Unsplash

OS and DB used:

  • Linux Mint 20 “Ulyana”
  • MySQL 8.0.21

Self-Promotion:

If you enjoy the content written here, by all means, share this
blog and your favorite post(s) with others who may benefit from or
like it as well. Since
coffee is my favorite drink, you can even buy me one if you would
like!

MySQL Shell get_auto_increment_value() method simple example

Getting started, I’ll create a practice table containing 2
columns: an ‘id’ column with the AUTO_INCREMENT attribute and a
‘name’ column of type VARCHAR, using the MySQL Shell in \py
mode. I’ll store the CREATE TABLE statement in a variable named
‘CREATE_TBL’ for easier processing:

mysql-py [learning]> CREATE_TBL = """
                      CREATE TABLE auto_test(
                          id INTEGER AUTO_INCREMENT,
                          name VARCHAR(20),
                          PRIMARY KEY at_id (id)) ENGINE=InnoDB"""

The CREATE TABLE command itself does not have a dedicated shell
method (at the time of writing that I am aware of). In order to
execute that type of MySQL command, we must call the sql() method
against a session object, providing the MySQL statement as the
argument:

mysql-py [learning]> db.session.sql(CREATE_TBL).execute()
Query OK, 0 rows affected (2.0548 sec)

With the table named ‘auto_test’, I’ll pass
‘auto_test’ as the argument to the get_table() method,
creating a working handle on the table itself. Calling the count()
method against the ‘auto_test’ table object shows that the
table is currently empty:

mysql-py [learning]> auto_test = db.get_table('auto_test')
mysql-py [learning]> auto_test.count()
0

Using the insert() and values() methods, I’ll add a single row
to the ‘auto_test’ table, chaining on the execute() method and
completing the INSERT:

mysql-py [learning]> auto_test.insert('name').values('Joshua').execute()
Query OK, 1 item affected (0.3200 sec)

Since the ‘id’ column has been created with the
AUTO_INCREMENT attribute, we can retrieve the LAST_INSERT_ID value
by executing the MySQL statement, SELECT LAST_INSERT_ID(). Again,
we need to use the sql() method for this query:

mysql-py [learning]> last_insert_id = db.session.sql("SELECT LAST_INSERT_ID()").execute().fetch_one()
mysql-py [learning]> last_insert_id
[
    1
]

The output shows the value of 1 as the last inserted id.
Confirming with a follow-up select(), we see that is correct:

mysql-py [learning]> auto_test.select().execute()
+----+--------+
| id | name   |
+----+--------+
|  1 | Joshua |
+----+--------+
1 row in set (0.0009 sec)

Inherently, there is nothing wrong or incorrect in doing this.
However, there is another Shell type of way we can use.

Let’s insert another row into the ‘auto_test’ table:

mysql-py [learning]> result = auto_test.insert('name').values('Jessica').execute()

However, this time around, we use the MySQL Shell
get_auto_increment_value() method and retrieve the most recently
inserted id:

mysql-py [learning]> last_insert_id = result.get_auto_increment_value()
mysql-py [learning]> last_insert_id
2

In querying with the select() method, we see the expected
results:

mysql-py [learning]> auto_test.select().execute()
+----+---------+
| id | name    |
+----+---------+
|  1 | Joshua  |
|  2 | Jessica |
+----+---------+
2 rows in set (0.0009 sec)

MySQL Shell get_auto_increment_value() method with multiple connections

As the following example queries show, the MySQL Shell
get_auto_increment_value() method works just as it should in other
sessions. Let’s establish another connection, insert() a row, and
then check get_auto_increment_value():

--session 2
mysql-py [learning]> auto_test2 = db.get_table('auto_test')
mysql-py [learning]> result2 = auto_test2.insert('name').values('Jeremy').execute()
mysql-py [learning]> sess_2_insert_id = result2.get_auto_increment_value()
mysql-py [learning]> sess_2_insert_id
3

Executing this select(), we can see the last inserted id is
3:

--session 1
mysql-py [learning]> auto_test.select().execute()
+----+---------+
| id | name    |
+----+---------+
|  1 | Joshua  |
|  2 | Jessica |
|  3 | Jeremy  |
+----+---------+
3 rows in set (0.0009 sec)

Now back in session 1, I’ll execute() a multi-valued insert()
by chaining multiple values() methods together and assigning the
results to a result object named ‘another’:

mysql-py [learning]> another = auto_test.insert('name').values('Malory').values('Manny').execute()
mysql-py [learning]> another.get_auto_increment_value()
4

In the query results above, the call to
get_auto_increment_value() shows 4 as the last inserted id, which
is for the ‘Malory’ row as that particular row is the first one
in the multi-row insert:

mysql-py [learning]> auto_test.select().execute()
+----+---------+
| id | name    |
+----+---------+
|  1 | Joshua  |
|  2 | Jessica |
|  3 | Jeremy  |
|  4 | Malory  |
|  5 | Manny   |
+----+---------+
5 rows in set (0.0011 sec)

Semi-related: I mentioned in the opening
paragraph that I program in PHP a great deal. Check out the post,
PHP
PDO lastInsertId() method with examples in MySQL
, where I cover
examples of retrieving the last insert id using PHP’s PDO.

Once again, the MySQL Shell is right on the mark, providing
DBAs and developers with the get_auto_increment_value() method for
those queries in which we need the last inserted id value.

Recommended Reading

I’ve written several blog posts on MySQL Shell Python mode, so
do visit any of the posts below that interest you:

For in-depth information on the methods used in the post and
other MySQL Shell Python mode related material, visit the online
X DevAPI
User Guide for MySQL Shell in Python Mode
documentation.

As always, if you see any corrections I need to make in the
code, or anything to improve on, please let me know via the
comments below.

MySQL Shell is a fantastic, alternative environment in which to
work with data. If you need to retrieve the last insert id from a
MySQL column with the AUTO_INCREMENT attribute, try the
get_auto_increment_value() method.

Like what you have read? See anything incorrect? Please comment
below and thank you for reading!!!

A Call To Action!

Thank you for taking the time to read this post. I truly hope
you discovered something interesting and enlightening. Please share
your findings here, with someone else you know who would get the
same value out of it as well.

Visit the Portfolio-Projects page to see blog
post/technical writing I have completed for clients.

To receive email notifications (Never Spam) from this blog
(“Digital Owl’s Prose”) for the latest blog posts as they are
published, please subscribe (of your own volition) by clicking the
‘Click To Subscribe!’ button in the sidebar on the homepage!
(Feel free at any time to review the Digital Owl’s Prose
Privacy Policy Page
for any questions you may have about: email
updates, opt-in, opt-out, contact forms, etc…)

Be sure and visit the “Best Of” page for a collection of my best
blog posts.

Josh Otwell has a passion to study and grow
as a SQL Developer and blogger. Other favorite activities find him
with his nose buried in a good book, article, or the Linux command
line. Among those, he shares a love of tabletop RPG games, reading
fantasy novels, and spending time with his wife and two
daughters.

Disclaimer: The examples presented in this post are hypothetical
ideas of how to achieve similar types of results. They are not the
utmost best solution(s). The majority, if not all, of the examples
provided, are performed on a personal development/learning
workstation-environment and should not be considered production
quality or ready. Your particular goals and needs may vary. Use
those practices that best benefit your needs and goals. Opinions
are my own.

The post MySQL Shell get_auto_increment_value() method – Python
mode appeared first on Digital Owl's Prose.