Fear and Loathing in MiddleWare
We were somewhere around JavaScript when we defeated PHP. I remember saying something like, "My head is spinning a little. Maybe you'd better lead the project."

Suddenly there was a terrible roar all around us... And the whole WEB was writing these articles about LAMP...
It seemed they were written for every possible need. They grew and swallowed the tasks for which Perl, bash and even C had been used before. No point in talking about those articles, I thought. Everyone knows all that.
We had IE 5, a little Netscape, Opera, and a sea of motley CGI modules in Perl. Not that it was the necessary technology stack. But once you start collecting crap, it becomes hard to stop. The only thing that really worried me was PHP. There is nothing more helpless, irresponsible and depraved than that. I knew that sooner or later we would get into that stuff too.
In the beginning
There was a time when JS differed between browsers like a horse from a tuba, and XMLHttpRequest was only possible through ActiveX. There was not much to choose from, and that cut the dynamic web off at the root. IE 6 seemed like a revolution. But if you wanted cross-browser support, your path led through bulky MiddleWare. A heap of articles about PHP, the tooling, and the freshly minted libraries around it did the trick.
Damned PHP makes you behave like the village drunkard in some early Irish novel: total loss of basic motor functions, blurred vision, a wooden tongue, the brain in horror. The interesting part is that you see everything but can control nothing. You approach the task knowing exactly where the data lives, where the logic sits and where the presentation is. But it all ends up somewhere else. You get tangled in the evil noodle code and think: what's the matter, what's happening?..
Gradually the tectonic plates between the browsers began to drift together. XMLHttpRequest appeared everywhere, and the dreams of a dynamic WEB with a clean separation of the front became reality.
Of course, I fell victim to the craze. An ordinary street loafer who studied everything that came to hand. The idea of a single language for web development was hovering in the air. Just wait till you see these changes in their new guise, man.
— Come on, take a look at Node.js.
— What? No!
— We can't stop here. This is thick-Middle country.
— Sit down.
What nonsense are they spouting? As if I don't know how to write JavaScript or can't see the similarity between Front, Middle and BackEnd. It's not a good option. It's too thick for the chosen target.
But I left those thoughts behind. It's the realization that you can drive a nail with a screwdriver, but if you reach for a hammer, the whole thing goes a lot faster.
I was in for an agonizing reappraisal of the whole situation.
And then there are DBMSs, such as PostgreSQL, that let you do unimaginable things to data with built-in functions. Business logic inside the database! The real BackEnd. You don't like business logic in the DBMS and think it doesn't belong there? But how do you like data feeds this direct?
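To make that claim concrete, here is a minimal sketch of what "business logic inside the database" means in practice. The orders table and the discount rule are hypothetical, invented purely for illustration:
-- Hypothetical example: a business rule living next to the data it operates on.
CREATE TABLE orders (
    o_id   SERIAL PRIMARY KEY,
    client TEXT,
    amount NUMERIC
);

CREATE OR REPLACE FUNCTION order_total(p_client TEXT) RETURNS NUMERIC AS $$
BEGIN
    -- The rule (5% off above 1000) is applied where the rows live,
    -- so no raw data has to travel through the middle layer.
    RETURN (SELECT sum(CASE WHEN amount > 1000 THEN amount * 0.95 ELSE amount END)
            FROM orders WHERE client = p_client);
END;
$$ LANGUAGE plpgsql;

-- SELECT order_total('acme');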

Let's try to make sense of the thorny path the data takes: direct flows versus the flows of a smoker.
The first thing we run into is the constant shuttling of data through the middle layer for any and every reason. Worse, that data is often not even final and is only needed to compute subsequent requests. Loops are used to process it, and those processing loops are not optimized for multithreading.
Multiple queries to the DBMS within one session lead to the inevitable apotheosis of speed death: pass after pass over the database tables.

To rule these factors out, the logic has to move closer to the data, that is, into the DBMS!
And that is the best way to take the speed to a new level!
Over time, browsers learned full dynamic generation of the interface. The achievement of a clean separation between data and presentation on the front was unlocked. It became entirely possible to push mostly data from the server to the client. Only one question remained: what to do with the middleware? How to shrink it down to a pure transport function with a bit of simple routing of requests to collections on the server side?
Those naive developers believed that inner peace and understanding could be found by studying one of the pillars of the old Middle, and the result is a generation of thick-layer people who never grasped the essential, old-as-the-world mistake of the IT industry: the doomed belief that someone or something cannot have a fatal flaw.
By that time I was already working with nginx, and there was no question about what to cook the idea on. The idea itself was no longer the point; now it was a race of programmer's laziness. The thought of describing this architecture the traditional way, through Apache, seemed absurd.
In 2010 the ngx_postgres module landed on GitHub. You specify queries in the config file and get the results in JSON/CSV/TORTOISE.
The configuration was as simple as the SQL code itself. Anything complicated could be wrapped in a function and then called from the config.
But this module cannot load the HTTP request body into the DBMS, which imposed serious limitations on the data sent to the server. A URL is clearly only suited for filtering a query.
Serialization of the output was done by the module itself, which, from the height of my sofa, looked like a pointless waste of RAM: PostgreSQL is perfectly able to serialize on its own.
There was an idea to fix it, to rewrite it. I started digging through the module's code.
After a sleepless night and a lot of searching it was clear that ngx_postgres would not be enough. We needed something stronger.
Madam, sir, baby, or whatever... there is an option. Here, take ngx_pgcopy.

NGX_PGCOPY
After experiment upon experiment and analysis of the tests, rewriting everything from scratch looked like the practical idea. To speed up loading, COPY queries were chosen: they pump data into the database much faster than INSERTs and come with their own parser. Unfortunately, given how thinly this type of query is documented, it is hard to say how the DBMS will behave under mass calls of this method.
There was madness in any direction, at any hour, panic. But there was a wonderful, universal sense of the rightness of everything we were doing.
Along with COPY queries we got CSV serialization in both directions out of the box, which removed the worries about data conversion.
Let's start with the primitive case: sending and receiving an entire table as CSV by URL:
http://some.server/csv/some_table
CSV part of the import.export.nginx.conf
pgcopy_server db_pub "host=127.0.0.1 dbname=testdb user=testuser password=123";

location ~/csv/(?<table>[0-9A-Za-z_]+) {
    pgcopy_query PUT db_pub "COPY $table FROM STDIN WITH DELIMITER as ';' null as '';";
    pgcopy_query GET db_pub "COPY $table TO STDOUT WITH DELIMITER ';';";
}
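Not part of the original configs, just a sketch of what the PUT and GET above effectively ask Postgres to do; handy for checking from psql before putting nginx in front (the simple_data table and rows are the sample ones shown further down):
-- Roughly what a PUT to /csv/simple_data turns into, runnable from psql:
COPY simple_data FROM STDIN WITH DELIMITER as ';' null as '';
7;seven
8;eight
\.

-- And the GET counterpart:
COPY simple_data TO STDOUT WITH DELIMITER ';';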

At the moment PostgreSQL cannot take JSON and XML through COPY STDIN. I hope the day will come when I make my peace with the tabs in the elephant's code style and I, or somebody else, will find the time to bolt this functionality onto the COPY methods. After all, the processing of these formats is already present in the DBMS.
However! There is a way to apply it here and now! Configure nginx with client_body_in_file_only on, then use the $request_body_file variable in the query and hand it to the pg_read_binary_file function...
Of course, all of this has to be wrapped in a COPY query, because it only works with my moped: ngx_postgres rejects the request body entirely. Other mopeds I have not met yet, and ngx_pgcopy has not matured enough for extra functionality.
Let's see how this looks for JSON/XML in import.export.nginx.conf
client_body_in_file_only on;
client_body_temp_path /var/lib/postgresql/9.6/main/import;

location ~/json/(?<table>[0-9A-Za-z_]+) {
    pgcopy_query PUT db_pub
        "COPY (SELECT * FROM import_json_to_simple_data('$request_body_file'))
         TO STDOUT;";
    pgcopy_query GET db_pub
        "COPY (SELECT '['||array_to_string(array_agg(row_to_json(simple_data)),
         ',')||']' FROM simple_data) TO STDOUT;";
}

location ~/xml/(?<table>[0-9A-Za-z_]+) {
    pgcopy_query PUT db_pub
        "COPY (SELECT import_xml_to_simple_data('$request_body_file')) TO STDOUT;";
    pgcopy_query GET db_pub
        "COPY (SELECT table_to_xml('$table', false, false, '')) TO STDOUT;";
}
Yes, client_body_temp_path will have to point into the database directory, and the user will have to be given ALTER ... SUPERUSER. Otherwise Postgres will send our desires far beyond the horizon.
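For reference, the permission the paragraph above is talking about is a one-liner. The narrower grant is only an option on newer Postgres versions and is mentioned here as an assumption, not something the module requires:
-- What the ALTER ... SUPERUSER above boils down to (Postgres 9.6, as in the config):
ALTER USER testuser WITH SUPERUSER;

-- On PostgreSQL 11+ a narrower grant may be enough for pg_read_binary_file,
-- assuming server-side file reading is the only superuser-ish thing you need:
-- GRANT pg_read_server_files TO testuser;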
The exports, presented in the GET methods, use built-in functions from the standard Postgres distribution. COPY wraps all output to STDOUT in case we want to report the result of these actions back to the client. The import into a fixed table (simple_data) looks a bit bulkier than the export, so it is moved into a user-defined procedure in the DBMS.
Part of 1.import.export.sql for import into a fixed table
CREATE OR REPLACE FUNCTION import_json_to_simple_data(filename TEXT)
RETURNS void AS $$
BEGIN
    INSERT INTO simple_data
        SELECT * FROM
            json_populate_recordset(null::simple_data,
                convert_from(pg_read_binary_file(filename), 'UTF-8')::json);
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION import_xml_to_simple_data(filename TEXT)
RETURNS void AS $$
BEGIN
    INSERT INTO simple_data
        SELECT (xpath('//s_id/text()', myTempTable.myXmlColumn))[1]::text::integer AS s_id,
               (xpath('//data0/text()', myTempTable.myXmlColumn))[1]::text AS data0
        FROM unnest(xpath('/*/*',
                 XMLPARSE(DOCUMENT convert_from(pg_read_binary_file(filename), 'UTF-8'))))
             AS myTempTable(myXmlColumn);
END;
$$ LANGUAGE plpgsql;
The import function for a flexible table in JSON is not much different from the one above. But similar flexibility for XML produces a far more monstrous construction.
Part of 1.import.export.sql for import into an arbitrary table
CREATE OR REPLACE FUNCTION import_vt_json(filename TEXT, target_table TEXT)
RETURNS void AS $$
BEGIN
    EXECUTE format(
        'INSERT INTO %I SELECT * FROM
             json_populate_recordset(null::%I,
                 convert_from(pg_read_binary_file(%L), ''UTF-8'')::json)',
        target_table, target_table, filename);
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION import_vt_xml(filename TEXT, target_table TEXT)
RETURNS void AS $$
DECLARE
    columns_name TEXT;
BEGIN
    columns_name := (
        WITH xml_file AS (
            --read tags from file
            SELECT * FROM unnest(xpath(
                '/*/*',
                XMLPARSE(DOCUMENT
                    convert_from(pg_read_binary_file(filename), 'UTF-8'))))
                AS myTempTable(myXmlColumn)
        ), columns_name AS (
            SELECT DISTINCT (
                xpath('name()',
                    unnest(xpath('//*/*', myTempTable.myXmlColumn))))[1]::text AS cn
            FROM xml_file AS myTempTable(myXmlColumn)
        --get target table cols name and type
        ), target_table_cols AS (
            SELECT a.attname, t.typname, a.attnum, cn.cn
            FROM pg_attribute a
            LEFT JOIN pg_class c ON c.oid = a.attrelid
            LEFT JOIN pg_type t ON t.oid = a.atttypid
            LEFT JOIN columns_name AS cn ON cn.cn = a.attname
            WHERE a.attnum > 0
                AND c.relname = target_table --'log_data'
            ORDER BY a.attnum
        --prepare cols to output from xpath
        ), xpath_type_str AS (
            SELECT CASE WHEN ttca.cn IS NULL THEN 'NULL AS '||ttca.attname
                        ELSE '((xpath(''/*/'||attname||'/text()'',
                              myTempTable.myXmlColumn))[1]::text)::'
                             ||typname||' AS '||attname
                   END AS xsc
            FROM target_table_cols AS ttca
        )
        SELECT array_to_string(array_agg(xsc), ',') FROM xpath_type_str
    );
    EXECUTE format('INSERT INTO %s SELECT %s FROM unnest(xpath(''/*/*'',
                        XMLPARSE(DOCUMENT convert_from(pg_read_binary_file(%L), ''UTF-8''))))
                    AS myTempTable(myXmlColumn)', target_table, columns_name, filename);
END;
$$ LANGUAGE plpgsql;
In these examples the table_name in the imported file does not affect the destination table; the destination is specified in nginx. The table_name/rows/cols hierarchy of the XML document is used solely for symmetry with the built-in table_to_xml function.
The data sets...
simple_data_table.sql
CREATE TABLE simple_data (
s_id SERIAL,
data0 TEXT
);
data.csv
0;zero
1;one
data.json
[ {"s_id": 5, "data0": "five"},
{"s_id": 6, "data0": "six"} ]
data.xml
<simple_data xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<row>
<s_id>3</s_id>
<data0>three</data0>
</row>
<row>
<s_id>4</s_id>
<data0>four</data0>
</row>
</simple_data>
We've wandered off; back to the roots, to pure COPY...
Okay. This is probably the only way. Just let me make sure I've got it right. You want to send data between the server and the client and, without any processing, pour it into a table?

I think I caught the fear.
Nonsense. We came here to find the MiddleWare dream.
And now that we're right in its vortex, you wanna leave?
You have to understand, man, we've found the main nerve.
Yes, that's almost it! It's the rejection of CRUD.
Of course, many will be outraged that some guy negates the achievements of Californian minds in a couple of sentences while styling a short article after the freaky dialogues of the film. But all is not lost. There is the option of sending a modifier along with the data itself. That, however, still drifts away from the usual RESTful architecture.
Besides, sometimes, or rather quite often, theoretical constructions shatter on the rocks of reality. One such rock is the same ill-fated multipass problem. Indeed, if the user is allowed to change a number of positions in a document, those changes will most likely involve methods of several types. As a result, to send a single document to the database you will need several separate HTTP requests, and each HTTP request will spawn its own modification and its own pass over the tables in the database. So a qualitative breakthrough requires a fundamental change: rejecting the classical understanding of the CRUD methods. Progress requires sacrifice.
And this is where I suspect a more interesting solution can be found. You only need to experiment a bit and think...
Entry point
By the time I asked the question,
anyone who could answer it was no longer around.
Yes, I'm starting again...
We send data to the middle layer, which passes it on to the database without any processing.
The DBMS parses it and puts it into a table that serves as a journal/log, validating all incoming data along the way.
Here it is, the key moment! Journaling/logging of the data flow out of the box! The rest is a job for triggers. Based on the business logic they decide whether to update, insert, or do something else entirely. Attaching a modifier to the data is optional, but it can be a nice bonus.
This leads us to the conclusion that the HTTP methods GET and PUT are sufficient. Let's try to model how to apply this, starting with the difference between a journal and a log. The key difference lies in the priority between the route of the changes and the end value: the first criterion characterizes journals, the second logs.
Despite the priority of the ultimate goal, journals still need the route. Which brings us to splitting their tables into two parts: the result and the journal.

What can you hang on this? Journals: vehicle routes, movement of material assets, and so on. Logs: inventory, the latest comment and other instantaneous states of the data.
Code of 2.jrl.log.sql:
Tables
CREATE TABLE rst_data ( --Output/result table 1/2
    s_id SERIAL,
    data0 TEXT, --Operating data
    data1 TEXT  --Operating data
);

--Service columns carry the s_ prefix; their input values are ignored and are set by the triggers
CREATE TABLE jrl_data ( --Input/journal table 2/2
    s_id SERIAL,      --Service column, current ID of the record
    s_cusr TEXT,      --Service column, user name that created the record
    s_tmc TEXT,       --Service column, time when the record was created
    p_trid INTEGER,   --Service column, target ID/parent in the rst_ (result) table,
                      -- if it exists, for modification
    data0 TEXT,
    data1 TEXT
);

CREATE TABLE log_data ( --Input/output log table 1/1
    s_id SERIAL,
    s_cusr TEXT,
    s_tmc TEXT,
    pc_trid INTEGER,  --Service column, target ID (parent on input/child after save)
                      -- in the CURRENT table, if it exists, for modification
    data0 TEXT,
    data1 TEXT
);
Trigger for journals
CREATE OR REPLACE FUNCTION trg_4_jrl() RETURNS trigger AS $$
DECLARE
    update_result INTEGER := NULL;
    target_tb TEXT := 'rst_'||substring(TG_TABLE_NAME from 5);
BEGIN
    --key::text, value::text
    DROP TABLE IF EXISTS not_null_values;
    CREATE TEMP TABLE not_null_values AS
        SELECT key, value FROM each(hstore(NEW)) AS tmp0
        INNER JOIN information_schema.columns
            ON information_schema.columns.column_name = tmp0.key
        WHERE tmp0.key NOT LIKE 's_%'
            AND tmp0.key <> 'p_trid'
            AND tmp0.value IS NOT NULL
            AND information_schema.columns.table_schema = TG_TABLE_SCHEMA
            AND information_schema.columns.table_name = TG_TABLE_NAME;

    IF NEW.p_trid IS NOT NULL THEN
        EXECUTE (WITH keys AS (
                     SELECT string_agg(key||'=$1.'||key, ',') AS key
                     FROM not_null_values)
                 SELECT format('UPDATE %s SET %s WHERE %s.s_id=$1.p_trid',
                               target_tb, keys.key, target_tb)
                 FROM keys)
        USING NEW;
    END IF;

    GET DIAGNOSTICS update_result = ROW_COUNT;
    IF NEW.p_trid IS NULL OR update_result = 0 THEN
        IF NEW.p_trid IS NOT NULL AND update_result = 0 THEN
            NEW.p_trid = NULL;
        END IF;
        EXECUTE format('INSERT INTO %s (%s) VALUES (%s) RETURNING s_id',
                       target_tb,
                       (SELECT string_agg(key, ',') FROM not_null_values),
                       (SELECT string_agg('$1.'||key, ',') FROM not_null_values))
        USING NEW;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
Trigger for logs
CREATE OR REPLACE FUNCTION trg_4_log() RETURNS trigger AS $$
BEGIN
    IF NEW.pc_trid IS NOT NULL THEN
        EXECUTE (
            WITH str_arg AS (
                SELECT key AS key,
                       CASE WHEN value IS NOT NULL OR key LIKE 's_%' THEN key
                            ELSE NULL
                       END AS ekey,
                       CASE WHEN value IS NOT NULL OR key LIKE 's_%' THEN 't.'||key
                            ELSE TG_TABLE_NAME||'.'||key
                       END AS tkey,
                       CASE WHEN value IS NOT NULL OR key LIKE 's_%' THEN '$1.'||key
                            ELSE NULL
                       END AS value,
                       isc.ordinal_position
                FROM each(hstore(NEW)) AS tmp0
                INNER JOIN information_schema.columns AS isc
                    ON isc.column_name = tmp0.key
                WHERE isc.table_schema = TG_TABLE_SCHEMA
                    AND isc.table_name = TG_TABLE_NAME
                ORDER BY isc.ordinal_position)
            SELECT format('WITH upd AS (UPDATE %s SET pc_trid=%L WHERE s_id=%L)
                           SELECT %s FROM (VALUES (%s)) AS t(%s)
                           LEFT JOIN %s ON t.pc_trid=%s.s_id',
                          TG_TABLE_NAME, NEW.s_id, NEW.pc_trid,
                          string_agg(tkey, ','),
                          string_agg(value, ','),
                          string_agg(ekey, ','),
                          TG_TABLE_NAME, TG_TABLE_NAME)
            FROM str_arg
        ) INTO NEW USING NEW;
        NEW.pc_trid = NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
The examples given cannot clear a cell, since that requires tying in a service value, and that value depends heavily on the import method: in some cases it can be of any type with completely arbitrary content, in others it must be of the target column's type, most likely a marker value outside the normal range.
Triggers fire in plain text-sort order of their names, so I recommend the trg_N_ prefix. Treat trg_0 through trg_4 as service triggers that only maintain the integrity of the overall logic and filter the input, and keep 5 through 9 for applied calculations. Nine triggers ought to be enough for anybody!
It is also worth saying that they have to be installed as BEFORE INSERT. With AFTER, the NEW record would land in the table before the trigger's modifications. In principle, if integrity is not critical to some degree, that variant can be a good accelerator for user queries going through the journal; it does not affect the values in the resulting table.
Still, with AFTER we would not be able to return an error when the user has no right to modify. Then again, a correct FrontEnd would not attempt operations the server forbids, so such behavior is more of a typical hack that will be peacefully recorded in the journal.
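The article defines the trigger functions but not the statements that hang them on the tables. Assuming the trg_N_ naming scheme above and the jrl_data/log_data tables from 2.jrl.log.sql, the wiring would look roughly like this:
-- Hook the journal/log trigger functions onto their tables (BEFORE INSERT, as argued above).
CREATE TRIGGER trg_4_jrl_data BEFORE INSERT ON jrl_data
    FOR EACH ROW EXECUTE PROCEDURE trg_4_jrl();

CREATE TRIGGER trg_4_log_data BEFORE INSERT ON log_data
    FOR EACH ROW EXECUTE PROCEDURE trg_4_log();

-- The hstore extension is needed for each(hstore(NEW)) inside those functions.
CREATE EXTENSION IF NOT EXISTS hstore;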
Prefiltering and routing

URL routing is done by standard nginx means, and query injections are filtered out the same way. After a double dose of trouble, code resembling the output of asymmetric encryption was put into the map directive of nginx.conf to obtain a digestible and safe SQL query, which then filters the data.
There are some difficulties, caused by the lack in nginx of a regexp syntax for multiple substitutions along the lines of sed s/bad/good/g. As a result, we...
get right into the middle of this damned cage. And that for somebody who has just enough mind to write the damn words! A little more and they'll rip your brain to shreds.
Up to four equality filters in the URL:
http://some.server/csv/table_name/*?col1=value&col2=value&col3=value&col4=value
Horrorshow part of filters.nginx.conf
#Prepare SQL filter
map $args $fst0 {
    default "";
    "~*(?<tmp00>[a-zA-Z0-9_]+=)(?<tmp01>[a-zA-Z0-9_+-.,:]+)(:?&(?<tmp10>[a-zA-Z0-9_]+=)(?<tmp11>[a-zA-Z0-9_+-.,:]+))?(:?&(?<tmp20>[a-zA-Z0-9_]+=)(?<tmp21>[a-zA-Z0-9_+-.,:]+))?(:?&(?<tmp30>[a-zA-Z0-9_]+=)(?<tmp31>[a-zA-Z0-9_+-.,:]+))?(:?&(?<tmp40>[a-zA-Z0-9_]+=)(?<tmp41>[a-zA-Z0-9_+-.,:]+))?" "$tmp00'$tmp01' AND $tmp10'$tmp11' AND $tmp20'$tmp21' AND $tmp30'$tmp31' AND $tmp40'$tmp41'";
}

#Check for correctness
map $fst0 $fst1 {
    default "";
    "~(?<tmp0>(:?[a-zA-Z0-9_]+='[a-zA-Z0-9_+-.,:]+'(?: AND )?)+)(:?( AND '')++)?" "$tmp0";
}
map $fst1 $fst2 {
    default "";
    "~(?<tmp0>[a-zA-Z0-9_+-=,.' ]+)(?= AND *$)" "$tmp0";
}

#If the correctness check passes, prepend WHERE
map $fst2 $fst3 {
    default "";
    "~(?<tmp>.+)" "WHERE $tmp";
}

server {
    location ~/csv/(?<table>result_[a-z0-9]*)/(?<columns>\*|[a-zA-Z0-9,_]+) {
        pgcopy_query GET db_pub
            "COPY (select $columns FROM $table $fst3) TO STDOUT WITH DELIMITER ';';";
    }
}
With filtering Cyrillic in the URL through the nginx configuration, things are not so smooth: you need a native conversion from one variable holding base64 into another holding human-readable text. At the moment there are no such directives, which is odd, because the conversion function is present in the nginx source.
Someday I will gather my thoughts and deal with this omission, and with the sed-style problem too, if the nginx inc team does not solve it first.
It would also be possible to hand the URL string with its arguments over to the DBMS and generate the dynamic query internally, in a direct function call or via a trigger on the log table. However, since this data is already stored in nginx's access.log, such initiatives are redundant; and given that they would add load on the database planner, they are even harmful.
The smoking-room FAQ
— Modules for nginx have been written for a long time, and successfully. What's with the fanfare?
Most of the existing analogues are highly specialized solutions. What is presented here is a reasonable compromise between speed and flexibility!
— Working through the disk (client_body_in_file_only) is slow!
So come with your RAM drive and its prophet, the file system cache.
— What about user permissions?
Authorization over plain HTTP can be forwarded into Postgres, which has all the machinery for that. In general, a complete BackEnd.
— What about encryption?
The SSL module, via the nginx configuration. At this stage it may not take off because of the raw ngx_pgcopy code.
The connection between nginx and Postgres, when they sit on different servers, the paranoid can tunnel through ssh.
— Why the heck the JS reflections at the beginning? Where is JavaScript?
JS goes off to the FrontEnd. And that is a whole other movie.
— Is there life on a client with JS disabled?
As you probably noticed in the examples, Postgres can do XML, so producing HTML output is not a problem, whether with spaghetti code or with an XSL schema.
It's awful. But everything will be fine. You are doing everything right.
— How do I resize pictures, pack an archive and compute lepton trajectories on the GPU?

It's not so simple.
Maybe I should chat with this guy, I thought.
Stay away from FastCGI!
That's what they want from us.
To lure us into that box,
down into the basement. There.
In short: turn client_body_in_file_only on, take the bundle of $request_body_file and plperlu, with its access to the shell, and adapt something along these lines:
CREATE OR REPLACE FUNCTION foo(filename TEXT) RETURNS TEXT AS $$
    return `/bin/echo -n "hello world!"`;
$$ LANGUAGE plperlu;
— This looks like CGI. But CGI is not safe!
Safety depends more on your expertise than on the technology applied. Yes, after bolting the environment on, it is compatible with CGI. Moreover, it can be used for a smooth migration off CGI with minimal adjustment of the scripts. Accordingly, the method is suitable for evacuating most PHP solutions.
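As a hedged illustration of that "bolting the environment on" remark (the script path and environment variables here are hypothetical, not part of ngx_pgcopy): a plperlu wrapper can fake enough of the CGI contract to keep an old script alive while the request body arrives via $request_body_file:
-- Hypothetical sketch: feed the saved request body to a legacy CGI-style script.
CREATE OR REPLACE FUNCTION run_legacy_cgi(filename TEXT) RETURNS TEXT AS $$
    my ($filename) = @_;
    # A couple of the variables a CGI script usually expects; extend as needed.
    $ENV{'REQUEST_METHOD'} = 'PUT';
    $ENV{'CONTENT_TYPE'}   = 'application/x-www-form-urlencoded';
    # The script path is an assumption for the example.
    return `/usr/lib/cgi-bin/legacy.cgi < $filename`;
$$ LANGUAGE plperlu;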
You could also fantasize further about distributed computing (clustering with PostgreSQL) and the freedom to pick your own approach to asynchrony. But that I will certainly not do here.
References
→ ngx_pgcopy
→ PostgreSQL COPY request
→ slim_middle_samples (samples from the article + a demo build)
WARNING
The module is still in development, so there may be stability problems. Since I have not yet implemented keep-alive connections on the backend side, this creation is still of limited fitness for the role of a supersonic fighter. Do read the module's README.
P.S. Actually, CRUD can be implemented without problems via stored procedures or by the journal method (it does not apply to logs). I simply forgot to add the DELETE method to the module.
This article uses footage and quotes from the movie "Fear and Loathing in Las Vegas" (1998). The materials are used exclusively for non-commercial purposes and to promote the cultural, educational and scientific development of society.