Spiff59 Posted October 6, 2011 (edited)

Maybe referencing queue architectures wasn't the best analogy, but this behavior is certainly more akin to FIFO than LIFO. LIFO reminds me of pushing addresses onto a stack and later popping them back off, where the data is retrieved in the reverse chronological order from which it was created. If we consider the order in which elements appear in the array as the order of retrieval, then this is FIFO. Anyway...

You are entirely correct about the count issue. I hadn't considered that the final value of the count might match an element already in the array (duh!), which is trouble. So there is no reasonable way to maintain a count within Scripting.Dictionary itself, and, as you have stated, to avoid script-breaking changes one would have to do it with post-processing that includes a ReDim.

Edit: Ok, I retract my last statement. Changing the last few lines to this is reasonable and doesn't technically call ReDim() or _ArrayInsert(), though the implicit array copy may take just as long:

$aArray = $oD.Keys
$aArray[0] = $iUnique
Return $aArray

Edited October 6, 2011 by Spiff59
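The order-preserving behavior Spiff59 describes, where Scripting.Dictionary's Keys collection comes back in insertion order, is the same property Python dicts have, so the dedupe-plus-count idea can be sketched in a few lines (a hedged illustration of the technique, not the UDF itself; the function name is made up):

```python
def array_unique(items):
    # dict.fromkeys() preserves insertion order (FIFO-like), so each
    # duplicate collapses to its first occurrence.
    unique = list(dict.fromkeys(items))
    # AutoIt UDF convention: element 0 carries the element count.
    return [len(unique)] + unique
```

Prepending the count here plays the role of _ArrayInsert() in the AutoIt version; overwriting slot 0 instead, as in the snippet above, avoids the extra copy but sacrifices whichever value sits there.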
Beege Posted October 6, 2011 (edited)

You are entirely correct about the count issue. I hadn't considered that the final value of the count might match an element already in the array (duh!), which is trouble. So there is no reasonable way to maintain a count within Scripting.Dictionary itself, and, as you have stated, to avoid script-breaking one would have to do it with post-processing that includes a Redim

What about adding an optional "return count" flag in the parameters? You could write it to do it the fastest way, not worrying about the count, and only if the user wants/needs the count would you then take the extra steps to ReDim. That would sort of separate your fast function from a still-fast, but slower, function that avoids script-breaking changes. Great job btw

EDIT: Maybe like this:

Func __ArrayUnique($aArray, $iDimension = 1, $iIdx = 0, $iCase = 0, $bReturnCount = True)
    ;;;;;;
    ; code
    ;;;;;;
    If Not $bReturnCount Then
        Return $oD.Keys()
    Else
        Local $iCount = '<$#!UNIQUE$#!>' & $oD.Count() & '<$#!UNIQUE$#!>'
        $oD.Item($iCount)
        $aArray = $oD.Keys()
        Local $iLast = UBound($aArray) - 1
        $aArray[$iLast] = Number(StringReplace($aArray[$iLast], '<$#!UNIQUE$#!>', ''))
        _ArraySwap($aArray[0], $aArray[$iLast])
        Return $aArray
    EndIf
EndFunc

Edited October 6, 2011 by Beege

Assembly Code: fasmg . fasm . BmpSearch . Au3 Syntax Highlighter . Bounce Multithreading Example . IDispatchASM
UDFs: Explorer Frame . ITaskBarList . Scrolling Line Graph . Tray Icon Bar Graph . Explorer Listview . Wiimote . WinSnap . Flicker Free Labels . iTunes
Programs: Ftp Explorer . Snipster . Network Meter . Resistance Calculator
wraithdu Posted October 6, 2011 (Author, edited)

I was thinking more along the lines of:

Local $aTemp = $oD.Keys()
_ArrayInsert($aTemp, 0, $oD.Count)
Return $aTemp

I don't like the idea of 'hoping' some key value will be unique to the dictionary. And this function doesn't alter the input array, hence using a temp array.

Edited October 6, 2011 by wraithdu
wraithdu Posted October 6, 2011 (Author)

I updated the first post, adding an $iFlags parameter to control the counter return, and made it the default behavior. On a 500,000-element array, it only added 0.2 seconds (10 sec total), which is acceptable.
Beege Posted October 6, 2011

I don't like the idea of 'hoping' some key value will be unique to the dictionary. And this function doesn't alter the input array, hence using a temp array.

I didn't either. For some reason I thought I had read that you were trying not to do a ReDim at the end, but looking over your posts it appears you never said that. Which makes sense, because doing only one ReDim, as you showed, barely affects it. Curious: since you pointed out that it doesn't alter the input array, would that change something if it did?
wraithdu Posted October 6, 2011 (Author)

The current implementation also does not alter the input array, so I'm just keeping with what we've got.
Beege Posted October 6, 2011

Ok, I see now.
AZJIO Posted April 23, 2012

wraithdu source:

For $i = $iIdx To UBound($aArray) - 1
    If $iDims = 1 Then
        $vElem = $aArray[$i]
    Else
        $vElem = $aArray[$i][$iDimension]
    EndIf
    $oD.Item($vElem)
Next

replacement (the dimension test is hoisted out of the loop so it runs once instead of once per element):

If $iDims = 1 Then
    For $i = $iIdx To UBound($aArray) - 1
        $vElem = $aArray[$i]
        $oD.Item($vElem)
    Next
Else
    For $i = $iIdx To UBound($aArray) - 1
        $vElem = $aArray[$i][$iDimension]
        $oD.Item($vElem)
    Next
EndIf
Myicq Posted June 28, 2012

Interesting topic, this. I sometimes have similar needs (my customers ask for unique test data for lottery printing), where the data amounts can approach the 500,000 record mark. I have thought about using SQLite for the purpose, essentially something like this:

- create a temp SQLite database in memory, with a distinct/unique restriction on a field
- run through the input data
- insert into the SQLite DB
- if error, the value existed already

The result is a unique list, I guess. Did not try it yet though. I tried GNU tools such as AWK and uniq on an 80 MB file, and it was quite slow.

So any thoughts on the SQLite route?
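Myicq's outline, an in-memory SQLite table with a uniqueness constraint where a failed insert means the value already exists, might look like this using Python's sqlite3 module (a sketch of the idea only; the poster did not supply code):

```python
import sqlite3

def unique_via_sqlite(values):
    # In-memory database; the PRIMARY KEY column rejects duplicate
    # inserts with an IntegrityError.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (v PRIMARY KEY)")
    for v in values:
        try:
            con.execute("INSERT INTO t (v) VALUES (?)", (v,))
        except sqlite3.IntegrityError:
            pass  # value existed already
    # rowid order reflects the insertion order of the successful inserts
    result = [row[0] for row in con.execute("SELECT v FROM t ORDER BY rowid")]
    con.close()
    return result
```

In practice, `INSERT OR IGNORE` combined with `executemany()` would skip the per-row exception handling and should be considerably faster on half a million records.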
Moderators SmOke_N Posted June 28, 2012 (edited)

So any thoughts on the SQLite route?

It works, I use it often, and it's fast. However, you must be careful about how you list the data types.

Edit: If there's more discussion, I'd suggest starting a new thread rather than hijacking this one.

Edited June 28, 2012 by SmOke_N

Common sense plays a role in the basics of understanding AutoIt... If you're lacking in that, do us all a favor, and step away from the computer.