Comments on blog.jameswebb.me

Lex (2016-01-13 13:47):
I'm a new bulk extractor user. Is there a method to get just a list of words rather than a gibberish list of everything? I'm getting so much junk that it is nearly unusable.

Lex (2016-01-13 13:46):
Great article. I'm new to bulk extractor and going through the wordlist. When I ran it, the list came up with more gibberish than discernible words. Is there a way to compare the bulk extractor wordlist against an English-language dictionary (or some other method) to get a list of actual words rather than gibberish?
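One simple approach (a sketch only; the file names here are hypothetical, and the system dictionary location varies by distribution) is to keep only the wordlist entries that appear in a dictionary file such as /usr/share/dict/words:

    # Sketch: filter a bulk_extractor wordlist against a system dictionary,
    # keeping only entries that are recognizable English words.
    # "wordlist.txt" is a stand-in for bulk_extractor's wordlist output, and
    # /usr/share/dict/words is a common (but not universal) dictionary path.
    with open("/usr/share/dict/words") as f:
        dictionary = {line.strip().lower() for line in f}

    with open("wordlist.txt") as f_in, open("words_only.txt", "w") as f_out:
        for line in f_in:
            word = line.strip()
            if word.lower() in dictionary:
                f_out.write(word + "\n")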
Anonymous (2015-03-02 11:16):
Hi Rob, thanks for this great utility. Can you please tell me whether these hash look-up queries can be made programmatically as well? If so, would they be made the same way we query any normal database on a server? My feeling is that some library must be imported first, because an ordinary database query would not get the performance speed-ups we see when issuing commands via the terminal in Linux. I hope my question is clear.
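No special client library is needed. One programmatic route (a sketch; it assumes the nsrllookup command-line client is installed and an nsrlsvr instance is reachable) is to drive the client as a subprocess. nsrllookup reads MD5 hashes on standard input and, by default, prints the hashes it could not find on the server:

    # Sketch: calling nsrllookup from a script. Assumes nsrllookup is on
    # PATH and an nsrlsvr instance is reachable (see nsrllookup's own
    # documentation for its server-selection options).
    import subprocess

    hashes = [
        "D41D8CD98F00B204E9800998ECF8427E",  # MD5 of the empty string
        "9E107D9D372BB6826BD81D3542A419D6",  # MD5 of "The quick brown fox..."
    ]

    result = subprocess.run(
        ["nsrllookup"],
        input="\n".join(hashes) + "\n",
        capture_output=True,
        text=True,
        check=True,
    )
    # By default, whatever comes back was NOT found in the hash set.
    print("unknown hashes:", result.stdout.split())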
macubergeek (2013-12-27 14:55):
I'm getting this error when I run msfupdate:

    root@ip-10-242-230-83:/opt/metasploit/msf3# msfupdate
    [*]
    [*] Attempting to update the Metasploit Framework...
    [*]
    [*] Checking for updates via git
    [*] Note: Updating from bleeding edge
    Saved working directory and index state WIP on master: 2ac02d3 Land #2802, @todb-r7's mods before release
    HEAD is now at 2ac02d3 Land #2802, @todb-r7's mods before release
    [*] Stashed local changes to avoid merge conflicts.
    [*] Run `git stash pop` to reapply local changes.
    HEAD is now at 2ac02d3 Land #2802, @todb-r7's mods before release
    Already on 'master'
    Already up-to-date.
    [*] Updating gems...
    /usr/local/bin/msfupdate:188:in `require': no such file to load -- bundler (LoadError)
        from /usr/local/bin/msfupdate:188:in `update_git!'
        from /usr/local/bin/msfupdate:137:in `run!'
        from /usr/local/bin/msfupdate:135:in `chdir'
        from /usr/local/bin/msfupdate:135:in `run!'
        from /usr/local/bin/msfupdate:292
    root@ip-10-242-230-83:/opt/metasploit/msf3#

Anonymous (2013-08-26 15:29):
Thanks for the article. I'm still having some issues, but I'm working through them. Something you might want to update. Instead of:

    sudo svn co https://www.metasploit.com/svn/framework3/trunk/ /opt/metasploit/msf3/

use:

    git clone https://github.com/rapid7/metasploit-framework.git

The first one gives you problems.

Anonymous (2013-05-16 13:41):
Making forensics interesting. Well done, sir.

Rob (2013-05-08 15:02):
Unfortunately, CDB, SQLite, BDB and the like really aren't in the cards, for a number of reasons.

1. No user demand for them. Requiring an 8 GB server and a 64-bit OS isn't all that unreasonable; my desktop development machine has 32 GB, for instance.

2. Third-party components make it harder for users to fully vet the nsrlsvr/nsrllookup system. As it currently stands, they're each about 1,000 lines of C++ and can be fully audited over a weekend.

3. Performance. Assuming you're checking a million hashes, you have to send 32 MB of hash data to the server, the server has to process it, and the server then sends 1 MB of lookup data back. With gigabit Ethernet over a LAN, the transmission time is effectively nil. If each hash lookup takes a millisecond and requires a disk access, then in the best case you're looking at about 15 minutes for nsrlsvr to complete the lookups and return the data. In the worst case, someone else is already running a large lookup and you have disk I/O contention on top of that. If the entire structure is kept in memory, lookup times are sub-microsecond, which means the million hashes get processed in about a second. Further, there is no worst-case scenario, since a C++ std::set is safe for simultaneous reads from multiple threads; there is no analogue to disk I/O contention.

If someone comes to me with a serious need for CDB functionality I'll consider introducing it, but for right now I think it would be a premature optimization that would harm performance without gaining us much. But I certainly agree that CDB, SQLite, etc. are neat tools, and there are tons of areas where they can be used productively.
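Rob's arithmetic is easy to check: a million MD5 hashes at 32 hex characters each is 32 MB up, one byte of result per hash is about 1 MB back, and a millisecond per disk-bound lookup totals 10^6 ms, on the order of a quarter of an hour. A minimal sketch of the in-memory alternative (illustration only, not nsrlsvr's actual code; Python's set is a hash table rather than the balanced tree nsrlsvr uses, but the memory-resident principle is the same, and "hashes.txt" is a hypothetical input file):

    # Sketch of the in-memory lookup idea. "hashes.txt" holds one
    # uppercase MD5 hex string per line.
    import time

    with open("hashes.txt") as f:
        known = {line.strip() for line in f}      # entire data set in RAM

    queries = ["D41D8CD98F00B204E9800998ECF8427E"] * 1_000_000

    start = time.perf_counter()
    hits = sum(1 for h in queries if h in known)  # each test is one probe
    print(f"{len(queries):,} lookups, {hits:,} hits "
          f"in {time.perf_counter() - start:.2f}s")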
Jim Webb (2013-05-08 11:14):
Thanks for your comments. I'm a fan of DJB's code, but I hadn't come across CDB before; I'll check it out. From your experience, it sounds very impressive performance-wise.

Also, thanks for the pointer to Hashsets.com (also new to me). A good listing of free and commercially available hash sets would be really useful.

It appears PassMark also provides some hash sets that might have utility (Common Keyloggers is nice):
http://www.osforensics.com/download.html

Beau Moore (2013-05-08 07:06):
Have you explored using a single-file database such as CDB (http://en.wikipedia.org/wiki/Cdb_(software) or http://cr.yp.to/cdb.html)? It is open source, it would remove the memory requirements and limitations you are hitting, and it still provides extremely fast lookups. In addition, if you build your CDB database using MD5 hashes as keys, you can cut the size of each MD5 sum in half, from a 32-byte hex string to a 16-byte binary value.

I have worked on a project where we had 35 million known-good hashes in a 1.4 GB CDB file. We achieved query times in the millisecond range on a lower-end laptop with only 2 GB of RAM. Because of the way CDB works, you do not have to load the hash data into a glob in RAM.
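The key-size saving Beau mentions is plain hex decoding; a short sketch:

    # Sketch: store each MD5 key as 16 raw bytes instead of 32 hex chars.
    hex_md5 = "d41d8cd98f00b204e9800998ecf8427e"  # 32 ASCII characters
    raw_key = bytes.fromhex(hex_md5)              # 16 bytes of binary
    assert len(raw_key) == 16 and raw_key.hex() == hex_md5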
Rob (2013-05-07 16:44):
Hi, this is Rob, the guy behind nsrlsvr and nsrllookup. First, thanks a lot for this review. It's always nice to see people are finding it useful!

To give a couple of technical details about how nsrlsvr works: it reads 30 million hashes from disk as ASCII strings and stores them in a balanced tree for rapid lookups. This means, first off, that it requires a minimum of a few hundred megabytes of memory to run, and that memory will be scattered all throughout the heap. On 4 GB systems this can create severe memory-fragmentation issues. It'll run, just less well than I'd like. Give it 8 GB and it should hum quite prettily.

What you get for this in-memory structure, though, is *volume*. You'll saturate your network connection long before the server stops being responsive. Since it's entirely memory-resident, it's quite snappy. You can set up a single nsrlsvr instance and have it used throughout your entire organization.

This makes it possible to give one person the "nsrladmin hat" and make them responsible for updating hash values in a timely fashion. That's far better than a dozen investigators each setting up their own nsrlsvr instance, with no two using exactly the same dataset six months later.

Anyway, thank you again for the review, and I hope it continues to be useful to you. If you have any questions, please feel free to drop a note my way. :)
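The architecture Rob describes, the whole hash set resident in memory behind one small network service shared across an organization, can be sketched generically. To be clear, this is not nsrlsvr's actual wire protocol or code; it is an illustration of the design, with the port number and file name chosen arbitrarily:

    # Generic illustration of a memory-resident hash-lookup service.
    # NOT nsrlsvr's actual protocol: a toy newline-delimited scheme in
    # which the server answers "1" (known) or "0" (unknown) per hash.
    import socketserver

    with open("hashes.txt") as f:                 # hypothetical hash list
        KNOWN = frozenset(line.strip().upper() for line in f)

    class LookupHandler(socketserver.StreamRequestHandler):
        def handle(self):
            for line in self.rfile:               # one hex hash per line
                h = line.strip().decode("ascii", "replace").upper()
                self.wfile.write(b"1\n" if h in KNOWN else b"0\n")

    if __name__ == "__main__":
        # Threads may read KNOWN concurrently; it is never mutated after
        # startup, mirroring Rob's point about simultaneous reads.
        addr = ("0.0.0.0", 9999)                  # arbitrary port
        with socketserver.ThreadingTCPServer(addr, LookupHandler) as server:
            server.serve_forever()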