Opened on Apr 1, 2012 at 3:57:25 AM
Closed on Nov 4, 2013 at 2:03:53 PM
#2171 closed Bug (Fixed)
Inconsistent delay for TCPTimeout option
| Reported by: | ripdad | Owned by: | Jpm |
|---|---|---|---|
| Milestone: | 3.3.9.22 | Component: | AutoIt |
| Version: | 3.3.8.1 | Severity: | None |
| Keywords: | | Cc: | |
Description
Setting a timeout of 1000 ms or higher causes a sleep delay in TCPAccept(). The length of the delay depends on the TCPTimeout setting.
Examples:
Opt('TCPTimeout', 1000) = 1 second
Opt('TCPTimeout', 2000) = 2 seconds
Opt('TCPTimeout', 5000) = 5 seconds
Thanks go to AdmiralAlkex for showing the timer differences.
```autoit
TCPStartup()

Local $iTimer, $Socket, $Server = TCPListen('127.0.0.1', 80)
If $Server = -1 Then Exit

Opt('TCPTimeout', 999) ; <-- set this to 1000 or higher to reproduce the problem

Local $gui = GUICreate('TCPTimeout Test', 400, 250, -1, -1)
GUISetState(@SW_SHOW)

While 1
    Switch GUIGetMsg()
        Case -3 ; $GUI_EVENT_CLOSE
            TCPShutdown()
            GUIDelete($gui)
            Exit
    EndSwitch

    $iTimer = TimerInit()
    $Socket = TCPAccept($Server) ; blocks for the TCPTimeout interval when no client connects
    MsgBox(0, '', TimerDiff($iTimer)) ; shows the elapsed time in milliseconds

    TCPCloseSocket($Socket)
WEnd
```
Link: http://www.autoitscript.com/forum/topic/137646-tcptimeout-bug/
Attachments (0)
Change History (8)
comment:2 by , on Jul 30, 2012 at 8:21:28 AM
| Summary: | TCPTimeout has sleep delay after 999ms → Inconsistent delay for TCPTimeout option |
|---|---|
comment:3 by , on Jul 30, 2012 at 3:49:29 PM
It's not inconsistent. For whatever reason the input is in milliseconds but the actual time waited is in seconds. If the underlying API only accepts seconds then AutoIt's design is stupid and shouldn't accept milliseconds when they are going to be converted to seconds internally (with associated precision loss due to storage in an integer). If the underlying API does allow milliseconds then AutoIt's implementation is stupid for converting to seconds.
To confirm that this is NOT actually inconsistent but rather based on a conversion to seconds, simply enter a time like 1750 and you'll see that it waits for 1 second.
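The behavior described above is consistent with a truncating integer conversion from milliseconds to whole seconds. A minimal sketch of that suspected conversion (in Python, purely as an illustration; the actual AutoIt source is not shown in this ticket):

```python
# Hypothetical illustration of the suspected bug: the millisecond
# timeout is converted to whole seconds with truncating integer
# division, so the sub-second remainder is silently discarded.
def effective_timeout_ms(timeout_ms: int) -> int:
    seconds = timeout_ms // 1000  # truncating integer division
    return seconds * 1000         # actual wait, back in milliseconds

print(effective_timeout_ms(1750))  # 1000 -> waits only 1 second
print(effective_timeout_ms(999))   # 0    -> no blocking delay at all
print(effective_timeout_ms(2000))  # 2000 -> waits 2 seconds
```

This reproduces both observations in the ticket: 1750 waits 1 second, and values below 1000 produce no delay.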
comment:4 by , on Jul 30, 2012 at 10:28:55 PM
AutoIt's implementation is stupid because it miscalculates the delay from the user's input.
The underlying API, strictly speaking, allows microsecond precision, and its internal timer is, on top of that, very precise.
The miscalculation scheme is so strange (maybe even stupid) that it actually made me think it was done on purpose. I even checked the logs to see who wrote that part of the code, but it's too old and therefore not logged.
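The microsecond precision mentioned above refers to the timeout argument of the Berkeley-style select() call, which takes separate seconds and microseconds fields (struct timeval), so no rounding to whole seconds is ever required. A hedged sketch of a lossless split, in Python for illustration:

```python
# Sketch of a lossless conversion: split a millisecond timeout into
# the (seconds, microseconds) pair that select()'s struct timeval
# expects, instead of truncating to whole seconds.
def ms_to_timeval(timeout_ms: int) -> tuple[int, int]:
    tv_sec, rem_ms = divmod(timeout_ms, 1000)
    return tv_sec, rem_ms * 1000  # remainder expressed as microseconds

print(ms_to_timeval(327))   # (0, 327000) -> full 327 ms preserved
print(ms_to_timeval(1750))  # (1, 750000) -> full 1750 ms preserved
```

With this split, a TCPTimeout of 327 would wait 327 ms rather than 0.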
comment:5 by , on Jul 31, 2012 at 5:08:51 AM
It wasn't me and it wasn't Jon, which pretty much explains it all, really.
comment:6 by , on Jul 21, 2013 at 11:31:57 PM
| Resolution: | → Rejected |
|---|---|
| Status: | new → closed |
comment:7 by , on Sep 7, 2013 at 10:46:20 AM
| Resolution: | Rejected |
|---|---|
| Status: | closed → reopened |
In fact, a bug led to the timeout being limited to whole seconds.
The sub-second part is ignored.
comment:8 by , on Nov 4, 2013 at 2:03:53 PM
| Milestone: | → 3.3.9.22 |
|---|---|
| Owner: | set to |
| Resolution: | → Fixed |
| Status: | reopened → closed |
Fixed by revision [9155] in version: 3.3.9.22

The code is meant to produce a time delay. The fact that it does so inconsistently is a bug. For example, a TCPTimeout of 327 should sleep for 327 ms.