The battle history of Redis and me under billion-level traffic

January 6, 2022

# 1. Background

One day an upstream caller reported that one of the Dubbo interfaces we provide was being briefly circuit-broken at a fixed time every day; the exception thrown was that the provider's Dubbo thread pool was exhausted. The interface currently handles about 1.8 billion requests per day, with roughly 940,000 of them failing each day. That is where this optimization journey began.

# 2. Quick response

## 2.1 Rapid positioning

First we went through the routine system metrics (machine, JVM memory, GC, threads). There were slight spikes, but they stayed within a reasonable range and did not line up with the time of the errors, so we set them aside for the moment.

Next we analyzed the traffic and found a surge at a fixed time every day; the surge coincided with the time of the errors. Our preliminary judgment was that the failures were caused by a short burst of heavy traffic.

![Traffic trend](https://static001.geekbang.org/infoq/ac/ace49acc5bc2b254ee731364a358ba75.png)

![Degradation count](https://static001.geekbang.org/infoq/3c/3cc063ebb1b1d762dfaf586260161959.png)

![Interface P99 line](https://static001.geekbang.org/infoq/54/547d97366352697cc45228e7ac06df72.png)

# 3. Finding the performance bottleneck

## 3.1 Interface flow analysis

### 3.1.1 Flow chart

![Flow chart](https://static001.geekbang.org/infoq/e5/e5e5709e6adcc1bcd25a3a2290d99653.png)

### 3.1.2 Flow analysis

> On receiving a request, call the downstream interface, wrapped in a hystrix circuit breaker with a 500 ms timeout.

> Using the data returned by the downstream interface, assemble the detail data: read the local cache first; if the local cache misses, fall back to Redis; if Redis also misses, return directly while an asynchronous thread backfills the caches from the database.

> If the downstream call in the first step fails, serve the bottom (fallback) data instead. The lookup path is the same: local cache first, then Redis, then return directly while an asynchronous thread backfills from the database.
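The following is a minimal sketch of the three-tier read path described above (local cache, then Redis, then an asynchronous backfill from the database). The names here (DetailReader, DetailDao, localCache, asyncPool) and the TTL are illustrative stand-ins, not the project's real code.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import redis.clients.jedis.JedisCluster;

public class DetailReader {

    private final ConcurrentHashMap<String, String> localCache = new ConcurrentHashMap<>();
    private final ExecutorService asyncPool = Executors.newFixedThreadPool(4);
    private final JedisCluster jedisCluster;
    private final DetailDao detailDao;

    public DetailReader(JedisCluster jedisCluster, DetailDao detailDao) {
        this.jedisCluster = jedisCluster;
        this.detailDao = detailDao;
    }

    public String getDetail(String key) {
        // 1. local cache first
        String value = localCache.get(key);
        if (value != null) {
            return value;
        }
        // 2. fall back to Redis and populate the local cache on a hit
        value = jedisCluster.get(key);
        if (value != null) {
            localCache.put(key, value);
            return value;
        }
        // 3. full miss: return immediately and backfill asynchronously from the database
        asyncPool.execute(() -> {
            String fromDb = detailDao.load(key);
            if (fromDb != null) {
                jedisCluster.setex(key, 3600, fromDb); // TTL is an assumed value
                localCache.put(key, fromDb);
            }
        });
        return null;
    }

    /** Hypothetical DAO standing in for the database source of truth. */
    public interface DetailDao {
        String load(String key);
    }
}
```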
## 3.2 Performance bottleneck investigation

### 3.2.1 The downstream interface is slow

The call chain showed that although the downstream interface's P99 did spike above 1 s at peak traffic, the timeout settings (circuit-breaker timeout 500 ms, coreSize & maxSize = 50, average downstream latency under 10 ms) meant the downstream interface was not the crux of the problem. To remove this source of interference and fail fast whenever the downstream service spikes, we lowered the circuit-breaker timeout to 100 ms and the Dubbo timeout to 100 ms.
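A sketch of what those fail-fast settings might look like, assuming the downstream call is wrapped in a plain HystrixCommand; the command group name and the simulated downstream call are illustrative, and the real Dubbo reference would additionally be configured with `timeout="100"` (for example via `<dubbo:reference timeout="100"/>`).

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;
import com.netflix.hystrix.HystrixThreadPoolProperties;

public class DownstreamQueryCommand extends HystrixCommand<String> {

    private final String param;

    public DownstreamQueryCommand(String param) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("downstream-query"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withExecutionTimeoutInMilliseconds(100))      // lowered from 500 ms to 100 ms
                .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
                        .withCoreSize(50)));                           // coreSize 50 (maxSize kept equal)
        this.param = param;
    }

    @Override
    protected String run() {
        // In the real service this would invoke the Dubbo reference (timeout = 100 ms);
        // simulated here so the sketch stays self-contained.
        return "downstream-result-for-" + param;
    }

    @Override
    protected String getFallback() {
        // Circuit broken or timed out: the caller falls back to the bottom-data path.
        return null;
    }
}
```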
### 3.2.2 Detail lookup: local cache has no data, falling back to Redis

With the help of the call-chain platform, the first step was to analyze the Redis request traffic in order to judge the local cache hit rate. It turned out that the Redis traffic was twice the interface traffic, which by design should never happen. A code review then uncovered the faulty logic.

The code was not reading from the local cache at all, but fetching directly from Redis. The Redis maximum response time did show unreasonable spikes, and further analysis showed that they lined up almost exactly with the Dubbo P99 spikes. It felt as if the cause had been found, and I was quietly pleased.

![Redis request traffic](https://static001.geekbang.org/infoq/0d/0d125fec49a6cbced8f8d36ffd34051d.png)

![Service interface request traffic](https://static001.geekbang.org/infoq/fc/fcd9bf47452667d3fc34a9eb977d8b09.png)

![Dubbo P99 line](https://static001.geekbang.org/infoq/ff/ffdfc17960caedcf2e1f7a40683c5ef9.png)

![Redis maximum response time](https://static001.geekbang.org/infoq/7a/7a9f3fe6b6a958699c613bc44f88a05b.png)

### 3.2.3 Bottom-data lookup: local cache has no data, falling back to Redis

Normal.

### 3.2.4 Writing the request result back to Redis

This Redis cluster already has resource isolation, and no slow queries showed up in the DB platform's slow log. The analysis at this point turned up many possible reasons for Redis being slow, but everything else was subjectively set aside; attention stayed on the doubled Redis request traffic, so the problem found in 3.2.2 was dealt with first.

# 4. Solution

## 4.1 Fixing the problem located in 3.2.2

![Redis request volume before the fix went live](https://static001.geekbang.org/infoq/0d/0d125fec49a6cbced8f8d36ffd34051d.png)

![Redis request volume after the fix went live](https://static001.geekbang.org/infoq/2f/2f0fc2d08c9af47015ad094c5b0343e5.png)

After the release the doubled Redis traffic was gone and the Redis maximum response time eased, but it was still not completely fixed, which showed that the high-volume queries were not the root cause.

![Redis maximum response time (before the fix went live)](https://static001.geekbang.org/infoq/66/6667b0ca10a3d33453a731b4859676f9.png)

![Redis maximum response time (after the fix went live)](https://static001.geekbang.org/infoq/9a/9a6116968b85f1a456e107af0253fd26.png)

## 4.2 Redis capacity expansion

With the abnormal Redis traffic gone but the problem still unresolved, all we could do was calm down and carefully work through the possible causes of slow Redis responses, along three lines:

- slow queries;
- a performance bottleneck in the Redis service itself;
- unreasonable client configuration.

Checking these one by one: the Redis slow-query log contained no slow queries.

Using the call-chain platform to analyze the slow Redis commands, now without the interference of the traffic-induced slowness, the problem was located quickly: a large share of the time-consuming requests were setex calls, and the occasional slow read also came right after a setex. Given that Redis executes commands on a single thread, we judged setex to be the prime culprit behind the Redis P99 spikes.
After finding the specific statement and the business that issued it, the first move was to apply for a Redis expansion, from 6 masters to 8.

![](https://static001.geekbang.org/infoq/a8/a8ae8caf453f091ec260bd048ab25b9e.png)

![](https://static001.geekbang.org/infoq/b4/b4cf1d4a23b53b06f4443f41ee1f9db7.png)

![Redis before the expansion](https://static001.geekbang.org/infoq/38/382e030c7a1779af34503278305e2eb6.png)

![Redis after the expansion](https://static001.geekbang.org/infoq/d5/d5d81475112685699cc27fa20e4ccaaf.png)

Judging by the results, the expansion had essentially no effect, which meant the Redis service itself was not the performance bottleneck. That left only the client-side configuration.

## 4.3 Client parameter optimization

### 4.3.1 Connection pool optimization

Since the Redis expansion had no effect, we turned to possible client-side problems, with two points of suspicion.

The first was that the client might have a BUG in how it manages connections in Redis cluster mode; the second was that the connection pool parameters might be set unreasonably. Source-code analysis and connection pool parameter tuning went ahead in parallel.

#### 4.3.1.1 Checking the client's connection management for a BUG

The analysis showed that the client's handling of the connection pool was fine and matched expectations: it caches one connection pool per slot. The first hypothesis was therefore ruled out. The source is as follows.

```java
// 1. setex
public String setex(final byte[] key, final int seconds, final byte[] value) {
  return new JedisClusterCommand<String>(connectionHandler, maxAttempts) {
    @Override
    public String execute(Jedis connection) {
      return connection.setex(key, seconds, value);
    }
  }.runBinary(key);
}

// 2. runBinary
public T runBinary(byte[] key) {
  if (key == null) {
    throw new JedisClusterException("No way to dispatch this command to Redis Cluster.");
  }

  return runWithRetries(key, this.maxAttempts, false, false);
}

// 3. runWithRetries
private T runWithRetries(byte[] key, int attempts, boolean tryRandomNode, boolean asking) {
  if (attempts <= 0) {
    throw new JedisClusterMaxRedirectionsException("Too many Cluster redirections?");
  }

  Jedis connection = null;
  try {

    if (asking) {
      // TODO: Pipeline asking with the original command to make it
      // faster....
      connection = askConnection.get();
      connection.asking();

      // if asking success, reset asking flag
      asking = false;
    } else {
      if (tryRandomNode) {
        connection = connectionHandler.getConnection();
      } else {
        connection = connectionHandler.getConnectionFromSlot(JedisClusterCRC16.getSlot(key));
      }
    }

    return execute(connection);

  }
  // ... (redirection/exception handling and connection release omitted in this excerpt)

// 4. getConnectionFromSlot
public Jedis getConnectionFromSlot(int slot) {
  JedisPool connectionPool = cache.getSlotPool(slot);
  if (connectionPool != null) {
    // It can't guaranteed to get valid connection because of node
    // assignment
    return connectionPool.getResource();
  } else {
    renewSlotCache(); //It's abnormal situation for cluster mode, that we have just nothing for slot, try to rediscover state
    connectionPool = cache.getSlotPool(slot);
    if (connectionPool != null) {
      return connectionPool.getResource();
    } else {
      //no choice, fallback to new connection to random node
      return getConnection();
    }
  }
}
```

#### 4.3.1.2 Analyzing the connection pool parameters

After discussing with the middleware team and consulting the official commons-pool2 documentation, the connection pool parameters were adjusted as follows.

![Adjusted connection pool parameters](https://static001.geekbang.org/infoq/6c/6c3f477e3ac68b49eab7f189613c8287.png)
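For reference, here is a hedged sketch of what such pool settings look like in code when a JedisCluster client is built directly; the concrete numbers and node address are illustrative placeholders, not the values from the screenshot above or from the configuration center.

```java
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

public class RedisClientFactory {

    public static JedisCluster build() {
        // JedisPoolConfig extends commons-pool2's GenericObjectPoolConfig,
        // so these are the same knobs discussed in this section.
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(50);            // illustrative value
        poolConfig.setMaxIdle(50);             // illustrative value
        poolConfig.setMinIdle(10);             // illustrative value
        poolConfig.setMaxWaitMillis(200);      // cap on the borrow wait, discussed below
        poolConfig.setBlockWhenExhausted(true);
        poolConfig.setTestOnBorrow(false);     // avoid an extra PING on every borrow
        poolConfig.setTestWhileIdle(true);     // validate idle connections in the background instead

        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("redis-node-1.example.com", 6379)); // placeholder address

        // connectionTimeout, soTimeout and maxAttempts are illustrative as well
        return new JedisCluster(nodes, 500, 500, 5, poolConfig);
    }
}
```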
After the parameters were adjusted, the number of requests taking more than 1 s dropped, though some remained, and the degradation count reported by the upstream caller fell from roughly 900,000 a day to about 60,000 (why requests over 200 ms still occurred even with maxWaitMillis set to 200 ms is explained below).

![Redis maximum response time after parameter tuning](https://static001.geekbang.org/infoq/c8/c86644c5436a2c5d01ebd9bf47d6e1ce.png)

![Interface error count after parameter tuning](https://static001.geekbang.org/infoq/50/50c2fca0770e9bfe0175342f82af12a3.png)

### 4.3.2 Continuing the optimization

The optimization could not stop there: how could every Redis write request be brought under 200 ms? The idea was still to tune client configuration parameters, which meant reading the source Jedis uses to obtain a connection.

How Jedis obtains a connection (commons-pool2 `borrowObject`):

```java
final AbandonedConfig ac = this.abandonedConfig;
if (ac != null && ac.getRemoveAbandonedOnBorrow() &&
        (getNumIdle() < 2) &&
        (getNumActive() > getMaxTotal() - 3) ) {
    removeAbandoned(ac);
}

PooledObject<T> p = null;

// Get local copy of current config so it is consistent for entire
// method execution
final boolean blockWhenExhausted = getBlockWhenExhausted();

boolean create;
final long waitTime = System.currentTimeMillis();

while (p == null) {
    create = false;
    p = idleObjects.pollFirst();
    if (p == null) {
        p = create();
        if (p != null) {
            create = true;
        }
    }
    if (blockWhenExhausted) {
        if (p == null) {
            if (borrowMaxWaitMillis < 0) {
                p = idleObjects.takeFirst();
            } else {
                p = idleObjects.pollFirst(borrowMaxWaitMillis,
                        TimeUnit.MILLISECONDS);
            }
        }
        if (p == null) {
            throw new NoSuchElementException(
                    "Timeout waiting for idle object");
        }
    } else {
        if (p == null) {
            throw new NoSuchElementException("Pool exhausted");
        }
    }
    if (!p.allocate()) {
        p = null;
    }

    if (p != null) {
        try {
            factory.activateObject(p);
        } catch (final Exception e) {
            try {
                destroy(p);
            } catch (final Exception e1) {
                // Ignore - activation failure is more important
            }
            p = null;
            if (create) {
                final NoSuchElementException nsee = new NoSuchElementException(
                        "Unable to activate object");
                nsee.initCause(e);
                throw nsee;
            }
        }
        if (p != null && (getTestOnBorrow() || create && getTestOnCreate())) {
            boolean validate = false;
            Throwable validationThrowable = null;
            try {
                validate = factory.validateObject(p);
            } catch (final Throwable t) {
                PoolUtils.checkRethrow(t);
                validationThrowable = t;
            }
            if (!validate) {
                try {
                    destroy(p);
                    destroyedByBorrowValidationCount.incrementAndGet();
                } catch (final Exception e) {
                    // Ignore - validation failure is more important
                }
                p = null;
                if (create) {
                    final NoSuchElementException nsee = new NoSuchElementException(
                            "Unable to validate object");
                    nsee.initCause(validationThrowable);
                    throw nsee;
                }
            }
        }
    }
}

updateStatsBorrow(p, System.currentTimeMillis() - waitTime);

return p.getObject();
```

The general flow of obtaining a connection is:

> If there is an idle connection, return it directly; otherwise create one.

> When creating, if the maximum number of connections would be exceeded, check whether other threads are in the middle of creating connections: if not, return null directly; if so, wait up to maxWaitMillis (the other threads may fail to create). If the maximum is not exceeded, create the connection (in which case the total time spent obtaining a connection can be longer than maxWaitMillis).

> If creation did not succeed, check whether borrowing is allowed to block: if not, throw a pool-exhausted exception; if it is, block indefinitely when maxWaitMillis is less than 0, otherwise block for at most maxWaitMillis.

> Finally, depending on the parameters, decide whether the connection needs to be validated, and so on.
By this analysis, with maxWaitMillis currently set to 200 the total blocking time of the flow above should be at most 400 ms, and 200 ms in most cases; there should be no spikes beyond 400 ms.

So the problem probably lay in creating connections: creating a connection takes time, and that time is uncertain and not bounded by maxWaitMillis. The focus became whether such a scenario actually occurs, checked through the DB platform's monitoring of Redis connections; the standalone demo below illustrates the effect.
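This small commons-pool2 demo (not project code; written against commons-pool2 2.6.x, with made-up pool sizes and a simulated 300 ms connection setup) shows the point: when the idle deque is empty, borrowObject() runs create() inline on the calling thread, so the borrow takes as long as the creation does, regardless of maxWaitMillis.

```java
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class SlowCreateDemo {

    /** Factory whose create() is slow, imitating TCP connect + auth to Redis under load. */
    static class SlowFactory extends BasePooledObjectFactory<String> {
        @Override
        public String create() throws Exception {
            Thread.sleep(300);                  // simulated connection setup time
            return "conn-" + System.nanoTime();
        }

        @Override
        public PooledObject<String> wrap(String obj) {
            return new DefaultPooledObject<>(obj);
        }
    }

    public static void main(String[] args) throws Exception {
        GenericObjectPoolConfig<String> config = new GenericObjectPoolConfig<>();
        config.setMaxTotal(8);
        config.setMinIdle(0);                   // nothing pre-created: the cold-pool situation
        config.setMaxWaitMillis(200);           // same order of magnitude as discussed above
        GenericObjectPool<String> pool = new GenericObjectPool<>(new SlowFactory(), config);

        long start = System.currentTimeMillis();
        String conn = pool.borrowObject();      // idle deque is empty, so create() runs inline
        System.out.println("borrow took " + (System.currentTimeMillis() - start)
                + " ms for " + conn);           // prints roughly 300 ms, above maxWaitMillis
        pool.returnObject(conn);
        pool.close();
    }
}
```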
Monitoring of the Redis connections on the DB platform:

![Redis connection count on the DB platform](https://static001.geekbang.org/infoq/5a/5a7758cfdc0bc6acfd957975957eb91c.png)

The chart shows that at certain minutes (9:00, 12:00, 19:00, ...) the number of Redis connections does indeed rise, and those times basically match the Redis spike times. It felt (after all the earlier attempts, I no longer dared to be certain) that the problem was finally clear: when the traffic surge arrives, the available connections in the pool cannot meet demand, new connections are created, and requests end up waiting.

The idea, then, was to build up the connection pool when the service starts and minimize the creation of new connections at runtime, so the connection pool parameter vivo.cache.depend.common.poolConfig.minIdle was changed. It turned out to have no effect at all???

Nothing for it but to read the source. Jedis manages connections with commons-pool2 underneath, so we looked at part of the commons-pool2-2.6.2.jar source.

commons-pool2 2.6.2 source:

```java
public GenericObjectPool(final PooledObjectFactory<T> factory,
        final GenericObjectPoolConfig<T> config) {

    super(config, ONAME_BASE, config.getJmxNamePrefix());

    if (factory == null) {
        jmxUnregister(); // tidy up
        throw new IllegalArgumentException("factory may not be null");
    }
    this.factory = factory;

    idleObjects = new LinkedBlockingDeque<>(config.getFairness());

    setConfig(config);
}
```

No initial connections are created here, so we consulted the middleware team, whose source (commons-pool2-2.4.2.jar), shown below, has one extra call at the end of the constructor: startEvictor.

```java
// 1. Initialize the connection pool
public GenericObjectPool(PooledObjectFactory<T> factory,
        GenericObjectPoolConfig config) {
    super(config, ONAME_BASE, config.getJmxNamePrefix());
    if (factory == null) {
        jmxUnregister(); // tidy up
        throw new IllegalArgumentException("factory may not be null");
    }
    this.factory = factory;
    idleObjects = new LinkedBlockingDeque<PooledObject<T>>(config.getFairness());
    setConfig(config);
    startEvictor(getTimeBetweenEvictionRunsMillis());
}
```

So why didn't it take effect???
Checking the jar packages showed that the versions were different: the source the middleware team provided was from v2.4.2, while the project actually used v2.6.2. Analyzing startEvictor, one step of its logic is precisely the connection pool warm-up.

Jedis connection pool warm-up:

```java
// 1. startEvictor
final void startEvictor(long delay) {
    synchronized (evictionLock) {
        if (null != evictor) {
            EvictionTimer.cancel(evictor);
            evictor = null;
            evictionIterator = null;
        }
        if (delay > 0) {
            evictor = new Evictor();
            EvictionTimer.schedule(evictor, delay, delay);
        }
    }
}

// 2. Evictor
class Evictor extends TimerTask {
    /**
     * Run pool maintenance. Evict objects qualifying for eviction and then
     * ensure that the minimum number of idle instances are available.
     * Since the Timer that invokes Evictors is shared for all Pools but
     * pools may exist in different class loaders, the Evictor ensures that
     * any actions taken are under the class loader of the factory
     * associated with the pool.
     */
    @Override
    public void run() {
        ClassLoader savedClassLoader =
                Thread.currentThread().getContextClassLoader();
        try {
            if (factoryClassLoader != null) {
                // Set the class loader for the factory
                ClassLoader cl = factoryClassLoader.get();
                if (cl == null) {
                    // The pool has been dereferenced and the class loader
                    // GC'd. Cancel this timer so the pool can be GC'd as
                    // well.
                    cancel();
                    return;
                }
                Thread.currentThread().setContextClassLoader(cl);
            }

            // Evict from the pool
            try {
                evict();
            } catch(Exception e) {
                swallowException(e);
            } catch(OutOfMemoryError oome) {
                // Log problem but give evictor thread a chance to continue
                // in case error is recoverable
                oome.printStackTrace(System.err);
            }
            // Re-create idle instances.
            try {
                ensureMinIdle();
            } catch (Exception e) {
                swallowException(e);
            }
        } finally {
            // Restore the previous CCL
            Thread.currentThread().setContextClassLoader(savedClassLoader);
        }
    }
}

// 3. ensureMinIdle
void ensureMinIdle() throws Exception {
    ensureIdle(getMinIdle(), true);
}

// 4. ensureIdle
private void ensureIdle(int idleCount, boolean always) throws Exception {
    if (idleCount < 1 || isClosed() || (!always && !idleObjects.hasTakeWaiters())) {
        return;
    }

    while (idleObjects.size() < idleCount) {
        PooledObject<T> p = create();
        if (p == null) {
            // Can't create objects, no reason to think another call to
            // create will work. Give up.
            break;
        }
        if (getLifo()) {
            idleObjects.addFirst(p);
        } else {
            idleObjects.addLast(p);
        }
    }
    if (isClosed()) {
        // Pool closed while object was being added to idle objects.
        // Make sure the returned object is destroyed rather than left
        // in the idle object pool (which would effectively be a leak)
        clear();
    }
}
```

The jar version was updated, and two parameters were added in the configuration center: vivo.cache.depend.common.poolConfig.timeBetweenEvictionRunsMillis (how often the idle connections in the pool are checked; connections idle for more than minEvictableIdleTimeMillis milliseconds are closed, and the pool is then refilled until it again holds minIdle connections) and vivo.cache.depend.common.poolConfig.minEvictableIdleTimeMillis (how long, in milliseconds, a connection in the pool is allowed to stay idle). After the service was restarted the connection pool warmed up normally, and the problem was finally solved at the Redis level.
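A hedged sketch of a pool configuration with those parameters set (the concrete values are illustrative, not the production values from the configuration center). With them in place the evictor task runs periodically: it evicts connections that have been idle too long, and its ensureMinIdle() step recreates connections up to minIdle, which is what keeps the pool warm.

```java
import redis.clients.jedis.JedisPoolConfig;

public class WarmedUpPoolConfig {

    public static JedisPoolConfig build() {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(50);                           // illustrative sizing
        config.setMaxIdle(50);
        config.setMinIdle(20);                            // connections kept warm per pool (assumed value)
        config.setMaxWaitMillis(200);

        // vivo.cache.depend.common.poolConfig.timeBetweenEvictionRunsMillis:
        // a positive value is what schedules the evictor task at all.
        config.setTimeBetweenEvictionRunsMillis(30_000);  // run the evictor every 30 s (assumed value)

        // vivo.cache.depend.common.poolConfig.minEvictableIdleTimeMillis:
        // connections idle longer than this become eligible for eviction;
        // ensureMinIdle() then recreates connections up to minIdle.
        config.setMinEvictableIdleTimeMillis(60_000);     // assumed value

        return config;
    }
}
```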
The optimization results are shown below; the performance problem was basically solved.

![Redis response time (before optimization)](https://static001.geekbang.org/infoq/c8/c86644c5436a2c5d01ebd9bf47d6e1ce.png)

![Redis response time (after optimization)](https://static001.geekbang.org/infoq/6d/6d48f2593b278d12f1666ffb9179e920.png)

![Interface P99 line (before optimization)](https://static001.geekbang.org/infoq/ff/ffdfc17960caedcf2e1f7a40683c5ef9.png)

![Interface P99 line (after optimization)](https://static001.geekbang.org/infoq/f5/f5064391a33cddc219f94fea7191f6c5.png)

# 5. Summary

When a problem occurs in production, the first priority is to restore the service quickly and minimize business impact, so prepare rate limiting, circuit breaking, and degradation strategies in advance; that way, when something goes wrong online, a recovery plan is at hand. Being fluent with the company's monitoring platforms (machine, service, interface, DB, and so on) determines how fast you can locate a problem; every developer should treat it as a basic skill.

When Redis responses become slow, investigate three areas first: the Redis cluster servers (machine load, slow queries on the service), the business code (is there a BUG), and the client (is the connection pool configured sensibly). That covers most slow-Redis problems.

Regarding warming up the connection pool on a cold start: different commons-pool2 versions take different approaches to cold start, but in all of them the warm-up only takes effect once parameters such as minEvictableIdleTimeMillis are configured. Read the official commons-pool2 documentation and get familiar with the common parameters; it makes locating problems much faster.

The default connection pool parameters are a bit weak for high-traffic services and need tuning for heavy-traffic scenarios; if the business traffic is modest, the defaults are fine.

Analyze each problem on its own terms, and when you cannot solve it, change your angle and try different approaches.

Author: vivo Internet server team - Wang Shaodong
