Like most Emacs users, I depend on Melpa to install my packages and keep them up to date. I have a couple of packages from other repositories, but the majority of them come from Melpa.
It’s easy to get grumpy when something goes wrong: there’s a build error, there’s a missing file, a package takes a while to get into Melpa, and so on. The truth, though, is that Melpa is a volunteer operation run by Steve Purcell and his helpers. Everything that happens on Melpa happens because Purcell or one of his helpers spent their time doing it for free.
I was reminded of that by this thread on Twitter: Thanks for all your hard work and in particular your choice to spend your own time on this. You are a real credit to the community and it would be significantly poorer without your constant gardening. I hope you get the chance to enjoy a well earned break. — Alex Murray (@alex_murray) If you click on the tweet, you’ll see that it started with Purcell saying he was 18 days behind in dealing with pull requests.
To deal with that, Purcell spent three and a half hours on a holiday weekend to clear the backlog. As Murray says, we all owe Purcell thanks for a job well done and for donating his time to this unpaid but very important work. Purcell lives in New Zealand, so most of us are unlikely to run into him, but if you do, be sure to give him your thanks and buy him a beer.
It’s the least we can do. (Post by jcs, December 23, 2017)

This is a post for Elisp programmers who have some Common Lisp (CL) experience.
One nice feature of CL is the flet macro that lets you temporarily bind a function lexically. Emacs Lisp also has a flet in the cl library, but it’s slightly different in that the binding is dynamic. If you’re not clear on what that means, Wellons has a nice explanation of the difference and a practical example of why the difference matters. In Emacs 24.3 the cl-lib library was introduced to replace the cl library.
Mostly that amounted to renaming the functions to have a ‘cl-’ prefix, but cl-flet differs from flet in that the binding is lexical as in Common Lisp. In general, that’s a good thing because it makes Elisp consistent with CL as far as flet is concerned and is almost always the desired behavior. It turns out, though, that the old behavior was useful in certain circumstances, such as stubbing out functions during testing. Happily, there’s an easy solution. Both Wellons and Malabarba explain the problem and the solution.
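To make the lexical/dynamic distinction concrete, here is a small sketch (the names greet and call-greet are made up for illustration): cl-flet rebinds only direct calls in its own body, while the usual stubbing idiom, cl-letf on symbol-function, temporarily swaps the global definition so that callees see the stub too.

```elisp
(require 'cl-lib)

(defun greet () "real")
(defun call-greet () (greet))   ; calls greet through its global definition

;; Lexical binding: only direct calls inside the body are rebound.
(cl-flet ((greet () "stub"))
  (list (greet) (call-greet)))  ; => ("stub" "real")

;; Dynamic rebinding: the global definition is swapped for the duration,
;; so even call-greet sees the stub -- handy for tests.
(cl-letf (((symbol-function 'greet) (lambda () "stub")))
  (call-greet))                 ; => "stub"
```

After the cl-letf form exits, greet is restored to its original definition.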
Malabarba’s post is short and to the point, while Wellons explains things in more detail. If you’re a serious Elisp programmer you should definitely read both posts. There’s a lot of confusion about flet and cl-flet, and the two posts linked above do an excellent job of clearing things up. Update [2017-12-22 Fri 13:36]: flex → flet; cl-flex → cl-flet (Post by jcs, December 21, 2017)
I manually categorize Emacs News links into an Org unordered list, and then I reorganize the list by using M-S-up (org-shiftmetaup) and M-S-down (org-shiftmetadown). I decide to combine or split categories depending on the number of links. I have a pretty consistent order. John Wiegley suggested promoting Emacs Lisp and Emacs development links at the top of the list. I like to sort the rest of the list roughly by interest: general links first, then Org, then coding, then other links at the bottom. Here’s some code that sorts Org lists in a custom sequence, with unknown items at the bottom for easy re-ordering.
It will take a list like:

- Other:
  - Link A
  - Link B
- Emacs development:
  - Link A
  - Link B
- Emacs Lisp:
  - Link A
  - Link B

and turn it into:

- Emacs Lisp:
  - Link A
  - Link B
- Emacs development:
  - Link A
  - Link B
- Other:
  - Link A
  - Link B

(defun my/org-sort-list-in-custom-order (order)
  "Sort the current Org list so that items are in the specified order.
ORDER is a list of regexps."
  (org-sort-list
   nil ?f
   (lambda ()
     (let ((case-fold-search t)
           (item (when (looking-at "[ \t]*[-+*0-9.)]+\\([ \t]+\\[[- X]\\]\\)?[ \t]+")
                   (org-sort-remove-invisible
                    (buffer-substring (match-end 0) (point-at-eol))))))
       (or (cl-position item order :test (lambda (a b) (string-match b a)))
           (1+ (length order)))))))

First things first: the title is a lie. If you happen to be one of my passionate readers, you may recall I started on April 1.
So yes, not every month of the year has been devoted to functional programming. I just needed something bold to pull you in, sorry. Now, how does it feel having worked with Clojure for almost a year? Here at work we have had our fair share of projects.
The open source ones are just a selected few: a driver for the OrientDB binary protocol; an SPA to show how our driver works; a little tool to interact with MySQL databases via REST APIs. I also had the chance to play with ArangoDB recently, and there were no problems building a sample project to understand its APIs. At home, a side project was born to strengthen my ever-growing knowledge and do something useful for the family.
When I started in the new office, the switch from professional Java to professional Clojure was a bit overwhelming. New libraries, new tools, new patterns, new ways of solving the same old problems, new problems to approach with a totally different mindset. It all seemed too much. Then, something clicked. Having the same language on both client- and server-side helped me figure out the matters at hand with a set of ideas I could easily reuse.
Once I understood the problem, I could look for the steps to solve it. Each step required a data structure and the function to handle this data structure. The first time I used reduce-kv because it was the most natural choice, it left a great smile on my face. There is still much to learn, though. Due to lack of experience with JavaScript, my ClojureScript-fu needs to improve. I have come to appreciate unit testing, but it’s time to put this love to work on my .cljs files too. I also definitely want to know more about Clojure web application security and performance.
2017 has been a great year to be a functional programmer. My recent experience is directing me more and more along my way.
The functional programming way. (December 21, 2017)

;; -*- lexical-binding: nil; -*-
(lambda (x) x)
;; => #[(x) "\010\207" [x] 1]

'(lambda (x) x)
;; => (lambda (x) x)

The #[...] is the syntax for a byte-code function object. As discussed in detail previously, it’s a special vector object that contains byte-code, and other metadata, for evaluation by Emacs’ virtual stack machine.
Elisp is one of very few languages with readable function objects, and this feature is core to its ahead-of-time byte compilation. The quote, by definition, prevents evaluation, and so inhibits byte compilation of the lambda expression. It’s vital that the byte compiler does not try to guess the programmer’s intent and compile the expression anyway, since that would interfere with lists that just so happen to look like lambda expressions — i.e. any list containing the lambda symbol.

There are three reasons you want your lambda expressions to get byte compiled:
• Byte-compiled functions are significantly faster. That’s the main purpose of byte compilation, after all.
• The compiler performs static checks, producing warnings and errors ahead of time.
This lets you spot certain classes of problems before they occur. The static analysis is even better under lexical scope due to its tighter semantics.
• Under lexical scope, byte-compiled closures may use less memory. More specifically, they won’t accidentally keep objects alive longer than necessary. I’ve never seen a name for this implementation issue, but I call it overcapturing. More on this later.

While it’s common for personal configurations to skip byte compilation, Elisp should still generally be written as if it were going to be byte compiled. General rule of thumb: ensure your lambda expressions are actually evaluated.

Lambda in lexical scope

As I’ve stressed many times, you should always use lexical scope. There’s no practical disadvantage or trade-off involved.
Once lexical scope is enabled, the two expressions diverge even without byte compilation.

;; -*- lexical-binding: t; -*-
(lambda (x) x)
;; => (closure (t) (x) x)

'(lambda (x) x)
;; => (lambda (x) x)

Under lexical scope, lambda expressions evaluate to closures. Closures capture their lexical environment in their closure object — nothing in this particular case. It’s a type of function object, making it a valid first argument to funcall. Since the quote prevents the second expression from being evaluated, semantically it evaluates to a list that just so happens to look like a (non-closure) function object. Invoking a data object as a function is like using eval — i.e. executing data as code.
Everyone already knows eval should not be used lightly. It’s a little more interesting to look at a closure that actually captures a variable, so here’s a definition for constantly, a higher-order function that returns a closure that accepts any number of arguments and returns a particular constant:

(defun constantly (x)
  (lambda (&rest _) x))

(constantly :foo)
;; => (closure ((x . :foo) t) (&rest _) x)

The environment has been captured as an association list (with a trailing t), and we can plainly see that the variable x is bound to the symbol :foo in this closure. Consider that we could manipulate this data structure (e.g. setcdr or setf) to change the binding of x for this closure. This is essentially how closures mutate their own environment. Moreover, closures from the same environment share structure, so such mutations are also shared.
More on this later. Semantically, closures are distinct objects (via eq), even if the variables they close over are bound to the same value. This is because they each have a distinct environment attached to them, even if in some invisible way.

(defun have-i-been-compiled-p ()
  (let ((funcs (vector nil nil)))
    (dotimes (i 2)
      (setf (aref funcs i) (lambda ())))
    (eq (aref funcs 0) (aref funcs 1))))

(have-i-been-compiled-p)
;; => nil

(byte-compile 'have-i-been-compiled-p)
(have-i-been-compiled-p)
;; => t

The trick here is to evaluate the exact same non-capturing lambda expression twice, which requires a loop (or at least some sort of branch). Semantically we should think of these closures as being distinct objects, but, if we squint our eyes a bit, we can see the effects of the behind-the-scenes optimization. Don’t actually do this in practice, of course. That’s what byte-code-function-p is for, which won’t rely on a subtle implementation detail.
Overcapturing

I mentioned before that one of the potential gotchas of not byte compiling your lambda expressions is overcapturing closure variables in the interpreter. To evaluate Lisp code, Emacs has both an interpreter and a virtual machine. The interpreter evaluates code in list form: cons cells, numbers, symbols, etc. The byte compiler is like the interpreter, but instead of directly executing those forms, it emits byte-code that, when evaluated by the virtual machine, produces identical visible results to the interpreter — in theory.
What this means is that Emacs contains two different implementations of Emacs Lisp, one in the interpreter and one in the byte compiler. The Emacs developers have been maintaining and expanding these implementations side-by-side for decades.
A pitfall to this approach is that the implementations can, and do, diverge in their behavior. We saw this above with that introspective function. Another way they diverge is in closure variable capture.

;; -*- lexical-binding: t; -*-
(defun overcapture (x y)
  (when y
    (lambda () x)))

(overcapture :x :some-big-value)
;; => (closure ((y . :some-big-value) (x . :x) t) nil x)

Notice that the closure captured y even though it’s unnecessary. This is because the interpreter doesn’t, and shouldn’t, take the time to analyze the body of the lambda to determine which variables should be captured. That would need to happen at run-time each time the lambda is evaluated, which would make the interpreter much slower. Overcapturing can get pretty messy if macros are introducing their own hidden variables.
On the other hand, the byte compiler can do this analysis just once at compile-time. And it’s already doing the analysis as part of its job. It can avoid this problem easily.

(overcapture :x :some-big-value)
;; => #[0 "\300\207" [:x] 1]

It’s clear that :some-big-value isn’t present in the closure.
But how does this work?

How byte compiled closures are constructed

Recall that the four core elements of a byte-code function object are:
• Parameter specification
• Byte-code string (opcodes)
• Constants vector
• Maximum stack usage

While it may seem like a closure requires compiling a whole new function each time the lambda expression is evaluated, there’s actually not that much to it! Only the closed-over environment changes. What this means is that closures produced by a common lambda expression can all share the same byte-code string (second element). Their bodies are identical, so they compile to the same byte-code. Where they differ is in their constants vector (third element), which gets filled out according to the closed-over environment.
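This sharing can be observed directly, under the assumption that aref is used to peek inside byte-code function objects (index 1 holds the byte-string). A sketch, with a made-up function name:

```elisp
;; -*- lexical-binding: t; -*-
(defun make-const (x)
  (lambda () x))

(byte-compile 'make-const)

;; Two closures with different environments -- (make-const 1) and
;; (make-const 2) return different values when funcall'd -- but they
;; should share the exact same byte-code string object:
(eq (aref (make-const 1) 1)
    (aref (make-const 2) 1))   ; expected: t
```

This relies on the internal layout of byte-code function objects, so treat it as an illustration rather than something to depend on.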
It’s clear just from examining the outputs.

0 constant  make-byte-code
1 constant  128
2 constant  "\300\207"
4 constant  vector
5 stack-ref 4
6 call      1
7 constant  2
8 call      4
9 return

(Note: since the byte compiler doesn’t produce perfectly optimal code, I’ve simplified it for this discussion.) It pushes most of its constants on the stack. Then the stack-ref (5) puts x on the stack.
Then it calls vector to create the constants vector (6). Finally, it constructs the function object (#[...]) by calling make-byte-code (8).
Since this might be clearer, here’s the same thing expressed back in terms of Elisp.

(defun adder ()
  (make-byte-code 0 "\300\211\242T\240\207" (vector (list 0)) 2))

In theory, this closure could operate by mutating its constants vector directly. But that wouldn’t be much of a constants vector, now would it!? Instead, mutated variables are boxed inside a cons cell.
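For reference, a source-level definition that behaves like the byte-code above — a reconstruction, since the original defun isn’t shown in this excerpt — is a counter whose mutable total lives in a box shared with the closure:

```elisp
;; -*- lexical-binding: t; -*-
(require 'cl-lib)

(defun adder ()
  (let ((total 0))
    (lambda () (cl-incf total))))

(let ((up (adder)))
  (list (funcall up) (funcall up) (funcall up)))
;; => (1 2 3)
```

Each call to adder produces a fresh box, so independent counters don’t interfere with one another.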
Closures don’t share constants vectors, so the main reason for boxing is to share variables between closures from the same environment. That is, they have the same cons in each of their constants vectors. There’s no equivalent Elisp for the closure in adder, so here’s the disassembly.

New: in-memory-diff takes two source buffers and treats their content as a set of unordered lines, as one would expect for a file like ~/.authinfo.gpg, for example. We don’t use diff(1) to diff the buffers, and thus we don’t write temporary files to disk. The result is two buffers, *A* and *B*. Each contains the lines the other buffer does not contain.
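The underlying operation — the set difference of two collections of lines, ignoring order — can be sketched with a hash table (a hypothetical helper, not the package’s actual code):

```elisp
(require 'seq)

(defun my/lines-only-in-a (lines-a lines-b)
  "Return the lines of LINES-A that do not occur in LINES-B, ignoring order."
  (let ((seen (make-hash-table :test #'equal)))
    (dolist (line lines-b)
      (puthash line t seen))
    (seq-remove (lambda (line) (gethash line seen)) lines-a)))

(my/lines-only-in-a '("machine a" "machine b") '("machine b" "machine c"))
;; => ("machine a")
```

Running it twice with the arguments swapped yields the contents of the *A* and *B* buffers respectively.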
These buffers are in a major mode with the following interesting key bindings:
• c – copy the current line to the other source buffer
• k – kill the current line from this source buffer
• RET – visit the current line in this source buffer
In theory, using c and k on all the lines should result in the two sources containing the same lines, and a subsequent call of in-memory-diff showing two empty buffers.

On December 7, Patreon made an announcement about the change in their transaction fee structure. The results as of December 10 speak for themselves: December 2017 summary: -$29 in pledges, -6 patrons. All leaving patrons marked 'I'm not happy with Patreon's features or services.'
as the reason for leaving, with quotes ranging from 'The billing changes are not great.' to 'Patreon's new fees are unacceptable.' In this article, I will explore the currently available methods for supporting sustainable Free Software development and compare their transaction fees.

My experience

My experience taking donations is very short. I announced my Patreon account in October 2017. Here's what I collected so far, vs. the actual money spent by the contributors:
• 2017-11-01: $140.42 / $162.50 = 86.41%
• 2017-12-01: $163.05 / $187.50 = 86.96%
The numbers here are using the old Patreon rules that are going away this month.
Real numbers:

method          formula        charged   donated   fee
old Patreon     ???            $1.00     $0.86     14%
new Patreon     7.9% + $0.35   $1.38     $0.95     31%
                               $2.41     $1.90     21%
                               $5.50     $4.75     14%
OpenCollective  12.9% + $0.30  $1.33     $0.90     32%
                               $2.36     $1.80     24%
                               $5.45     $4.50     18%
Flattr          16.5%          $1.00     $0.84     17%
                               $2.00     $1.67     17%
                               $5.00     $4.18     17%
Liberapay       0.585%         $1.00     $0.99     1%

On Patreon

Just like everyone else, I'm not happy with the incoming change to the Patreon fees. But even after the change, it's still a better deal than OpenCollective, which is used quite successfully by some prominent projects. Just to restate the numbers in the table: if all backers give $1 (which is the majority currently, and I actually would generally prefer 5 new $1 backers over 1 new $5 backer), with the old system I get $0.86, while with the new system it's $0.69.
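The fee column follows directly from the charged and donated amounts; a quick sketch with a made-up helper name:

```elisp
(defun my/fee-percent (charged donated)
  "Return the effective fee as a percentage of the amount CHARGED."
  (* 100.0 (/ (- charged donated) charged)))

(my/fee-percent 1.38 0.95)  ; => ~31.2, the new-Patreon $1 row
(my/fee-percent 1.00 0.86)  ; => ~14, the old-Patreon row
```

The same helper reproduces the OpenCollective, Flattr, and Liberapay rows from their charged/donated pairs.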
That's more than a 100% increase in transaction fees.

On OpenCollective

It's more expensive than the new Patreon fees in every category or scenario.

On Flattr

Flattr is in the same bucket as Patreon, except with slightly lower fees currently. Their default plan sounds absolutely ridiculous to me: you install a browser plug-in so that a for-profit corporation can track which websites you visit most often in order to distribute the payments you give them among those websites. If it were a completely local tool which doesn't upload any data to the internet and instead gives you a monthly report to adjust your donations, it would have been a good enough tool. Maybe with some adjustments for mind-share bubbles, which result in prominent projects getting more rewards than they can handle, while small projects fade away into obscurity without getting a chance.
But right now it's completely crazy. Still, if you don't install the plug-in, you can probably still use Flattr, and it will work similarly to Patreon. I made an account there, just in case, but I wouldn't recommend going to Flattr unless you're already there, or the first impression it made on me is wrong.
On Paypal

Paypal is OK in a way, since a lot of the time organizations like Patreon are just middlemen on top of Paypal. On the other hand, there's no way to set up recurring donations. And it's harder for me to plan decisions regarding my livelihood if I don't know at least approximately the sum I'll be getting next month. My account, in case you want to make a lump sum donation:.

On Bitcoin

Bitcoin is similar to Paypal, except it also:
• has a very bad impact on the environment,
• is a speculative bubble that supports either earning or losing money without actually providing value to society.
I prefer to stay away from Bitcoin.

Summary

Liberapay sounds almost too good to be true.
At the same time, their fees are very realistic, you could almost say optimal, since there are no fees for transfers between members. So you can spend either €20.64 (via card) or €20.12 (via bank wire) to charge €20 into your account and give me €1 per month at no further cost. If you change your mind after one month, you can withdraw your remaining €19 for free if you use a SEPA (Single Euro Payments Area) bank.
If I set out today to set up a service similar to Liberapay, even with my best intentions and the most optimistic expectations, I don't see how a better offer could be made. I recommend anyone who wants to support me to try it out. And, of course, I will report back with real numbers if anything comes out of it. Thanks to all my patrons for their former and ongoing support.
At one point we were at 30% of the monthly goal (25% atm.). This made me very excited and optimistic about the future. Although I've been doing Free Software for almost 5 years now, it's actually been 3 years in academia and 2 years in industry. Right now, I'm feeling a burnout looming over the horizon, and I was really hoping to avoid it by spending less time working at for-profit corporations. Any help, either monetary or advice, is appreciated.
If you're part of a Software Engineering or Research collective that makes you feel inspired instead of exhausted in the evening, and you have open positions in the EU or remote, have a look at my profile - maybe we could become colleagues in the future. I'll accept connections from anyone - if you're reading this blog, we probably have a lot in common; and it's always better together. (December 9, 2017)
Since my first baby steps in the world of Functional Programming, Haskell has been there. Like the enchanting music of a Siren, it has been luring me with promises of a new set of skills and a better understanding of the lambda calculus.
I refused to oblige at first. A bit of Scheme occupied my mind and my daily activities. Truth be told, the odious warfare between dynamic-types troopers and static-types zealots didn’t help steer my enthusiasm towards Haskell. Still, my curiosity is stoic and hard to kill, and the Haskell Siren was becoming too tempting to resist any further. Something in me knew it was the right thing to do. My knowledge portfolio is always reaching out for something new. My journey began with a much praised book.
I did some of the exercises, only to soon discover this wasn’t the right book for me. A bit too terse and schematic; I needed something that could ease me in in a different way. I needed more focus on the basics, the roots of the language.
As I usually do, I sought help online. I don’t know many Haskell developers, but I know there are crazy guys in the Emacs community. One of them was kind and patient enough to introduce me to Haskell Programming from First Principles. This is a huge book (nearly 1300 pages), but it just took the authors’ prefaces to hook me. Julie Moronuki’s words in particular resonated heavily with me. Unlike Julie I have experience in programming, but I felt exactly like her when it comes to approaching Haskell teaching materials.
So here I am, armed and ready to abandon myself to the depths and wonders of static typing and pure functional programming. I will track my progress and maybe report back here. I already have a project in mind, but my Haskell needs to get really good before starting any serious work. May the lambda be with me. (December 8, 2017)
Sometimes when I plan to read a longish HTML text, I fire up eww, a small web browser that comes with Emacs. However, reading pages on a larger monitor doesn't provide a good experience, at least not for me. Here is an example. Let's fix that with some elisp code:

(defun eww-more-readable ()
  "Make eww more pleasant to use.  Run it after eww buffer is loaded."
  (interactive)
  (setq eww-header-line-format nil)               ;; removes page title
  (setq mode-line-format nil)                     ;; removes mode-line
  (set-window-margins (get-buffer-window) 20 20)  ;; increases size of margins
  (redraw-display)                                ;; apply mode-line changes
  (eww-reload 'local))                            ;; apply eww-header changes

EWW already comes with an eww-readable function, so I named this one eww-more-readable. Evaluate it and call with:

M-x eww-more-readable

Result is much better now. EDIT: Chunyang Xu noticed that the elisp code had a balanced-parentheses issue and also suggested using (eww-reload 'local) to avoid re-fetching the page.
(November 30, 2017)

I use the Org agenda, but inevitably I have multiple items without a specific deadline or scheduled date and that have the same priority. These appear in my agenda in the order in which they were added to my to-do list, but I’ll sometimes want to change that order. This can be done temporarily using M-UP or M-DOWN in the agenda view, but these changes are lost when the agenda is refreshed. I came up with a two-part solution to this. The main part is a generic function to move the subtree at the current point to be the top item of all subtrees of the same level.
Here is the function.

Elbank is a new Emacs package I’ve been working on lately. It’s a personal finances and budgeting package for Emacs that uses Weboob for scraping data from bank websites.
I started building Elbank after using Ledger for several years. While Ledger is a real gem, I didn’t want to spend time doing bookkeeping anymore.
Instead, I wanted a simple reporting tool that would automatically scrape data and build reports within Emacs from it.

Setting up Weboob

To use Elbank, you will first have to install Weboob. Weboob is a collection of applications used to interact with websites from the command-line. Elbank uses the banking application named boobank to scrape data. The list of currently supported bank websites is available online.
Fortunately, installing Weboob should be a breeze, as there are packages for most GNU/Linux distros, and an installer for Mac users. Once Weboob is installed, run boobank in a console to set up your accounts.

Installing Elbank

You can now install elbank from MELPA by running M-x package-install RET elbank RET, and voila!

Using Elbank

The overview buffer

Run M-x elbank-overview to get started. The overview buffer lists all accounts, as well as custom reports and budgets.
Press u to import the bank statements from your bank website. You can click on each account or report displayed in the buffer to open them.

Categorizing transactions

Transaction categories are an important aspect of Elbank. Categories make it possible to filter and budget. Transactions are automatically categorized when reporting, using the custom variable elbank-categories. Here’s an example value for elbank-categories; you should adjust it based on your own transactions and categorizing needs.
(setq elbank-categories
      '(("Expenses:Food" . ("^supermarket"
                            "^restaurant"
                            "Local store XXX"
                            "Bakery XXX"))
        ("Expenses:Rent" . ("Real Estate Agency XXX"))
        ("Income:Salary" . ("Bank transfer from Company XXX"))))

Each transaction’s text is matched against the regular expressions of elbank-categories; the first match defines the category of a transaction.

Reports

Evaluate M-x elbank-report to create a new report.
The command will ask you for an account, period and category, which are all optional. Here’s the list of keybindings available in a report buffer:
• f c: Filter the transactions by category
• f a: Only show transactions in a specified account
• f p: Select the period of the report
• G: Group transactions by some property
• S: Sort transactions
• s: Reverse the sort order
• M-p: Move backward by one period (month or year)
• M-n: Move forward by one period (month or year)
You can also customize the variables elbank-saved-monthly-reports and elbank-saved-yearly-reports to conveniently get a quick list of commonly used reports from the overview buffer.

Budgeting

The custom variable elbank-budget is used to define a monthly budget.
It defines how much money we want to spend by category of transaction, like "Food" or "Rent".

(setq elbank-budget '(("Expenses:Food" . 300)
                      ("Expenses:Rent" . 450)
                      ("Expenses:Transport" . 120)
                      ("Expenses:Utilities" . 145)))

Note that budgeted amounts are positive numbers while expenses have negative values. Press b from the overview buffer or evaluate M-x elbank-budget-report to see your expenses based on your budget. You can switch periods with M-p and M-n the same way as in report buffers.
Conclusion

That’s all for now! Elbank is still in its infancy, but I’m already using it daily. If you find any bugs or would like to suggest improvements, feel free to open a ticket on the project’s issue tracker.
Intro

Ivy is a completion method that's similar to Ido, but with an emphasis on simplicity and customizability.

Overview

The current release consists of 280 commits and 8 months of progress since 0.9.0.
Many issues were fixed. The number of people who contributed code has grown; thanks, everyone!

Details on changes

Changelog.org has been a part of the repository since 0.6.0; you can get the details of the current and past changes there.

Highlights

Many improvements are incremental and don't require any extra code to enable. I'll go over a few selected features that require a bit of information to make good use of them.

Selectable prompt

Off by default. You can turn it on like so:

(setq ivy-use-selectable-prompt t)

After this, your current input becomes selectable as a candidate.
Press C-p when you're on the first candidate to select your input instead. This solves the long-standing issue of e.g. creating a file or a directory foo when a file foobar already exists.
Previously, the only solution was to use C-M-j. It's still available, but now you can also select your input with C-p and press RET.

New global actions for ivy

ivy-set-actions was used to enable the following bindings:
• Press M-o w to copy the current candidate to the kill ring.
• Press M-o i to insert the current candidate into the buffer.
These bindings are valid for any completion session by default.

Use C-d in ivy-occur buffers

Here's an example use-case: search your source code for a variable name with e.g. counsel-rg and call ivy-occur (C-c C-o). Suppose you get 10 results, only 4 of which are interesting. You can now delete the uninteresting ones with C-d.
Then maybe check off the others with C-d as well as you complete them one by one. A sort of a TODO list. Similarly, if you want to go over variables to customize, you can call counsel-describe-variable with input ^counsel-[^-] and then check off the ones you have already examined with C-d.

Defcustoms to play with

Here's the list of new defcustoms or defvars that might be interesting to review:
• counsel-async-filter-update-time
• counsel-async-ignore-re
• counsel-describe-function-function
• counsel-describe-function-preselect
• counsel-find-file-ignore-regexp
• counsel-fzf-dir-function
• counsel-git-grep-skip-counting-lines
• counsel-git-log-split-string-re
• counsel-url-expansions
• ivy-auto-select-single-candidate
• ivy-magic-slash-non-match-action
• ivy--preferred-re-builders
• ivy-truncate-lines

New Commands

14 new commands were added by me and many contributors.

Emacspeak is a fully functional audio desktop that provides complete eyes-free access to all major 32 and 64 bit operating environments.
By seamlessly blending live access to all aspects of the Internet such as ubiquitous assistance, Web-surfing, blogging, social computing and electronic messaging into the audio desktop, Emacspeak enables speech access to local and remote information with a consistent and well-integrated user interface. A rich suite of task-oriented tools provides efficient speech-enabled access to the evolving assistant-oriented social Internet cloud. This version requires emacs-25.1 or later.
• Speech-Enable Extensible EVIL — VI Layer: ⸎
• Bookshare — Support Additional downloads (epub3, mp3): 🕮
• Bookmark support for EBooks in EWW: 📔
• Speech-Enable VDiff — A Diff tool: ≏
• Speech-enable Package shx — Shell Extras For Emacs: 🖁
• Updated IDO Support: ⨼
• Implemented NOAA Weather API: ☔
• Speech-Enable Typographic Editing Support: 🖶
• Speech-Enable Package Origami: 🗀
• Magit Enhancements for Magitians: 🎛
• Speech-Enable RipGrep Front-End: ┅
• Added SmartParen Support: 〙
• Speech-enabled Minesweeper game: 🤯
• And a lot more than will fit in this margin.
Never a toy system, Emacspeak is voluntarily bundled with all major Linux distributions. Though designed to be modular, distributors have freely chosen to bundle the fully integrated system without any undue pressure—a documented success for the integrated innovation embodied by Emacspeak. As the system evolves, both upgrades and downgrades continue to be available at the same zero-cost to all users.
The integrity of the Emacspeak codebase is ensured by the reliable and secure Linux platform used to develop and distribute the software. Extensive studies have shown that thanks to these features, users consider Emacspeak to be absolutely priceless.
Thanks to this wide-spread user demand, the present version remains priceless as ever—it is being made available at the same zero-cost as previous releases. At the same time, Emacspeak continues to innovate in the area of eyes-free Assistance and social interaction and carries forward the well-established Open Source tradition of introducing user interface features that eventually show up in luser environments. On this theme, when once challenged by a proponent of a crash-prone but well-marketed mousetrap with the assertion 'Emacs is a system from the 70's', the creator of Emacspeak evinced surprise at the unusual candor manifest in the assertion that it would take popular idiot-proven interfaces until the year 2070 to catch up to where the Emacspeak audio desktop is today. Industry experts welcomed this refreshing breath of Courage Certainty and Clarity (CCC) at a time when users are reeling from the Fear Uncertainty and Doubt (FUD) unleashed by complex software systems backed by even more convoluted press releases. Independent test results have proven that unlike some modern (and not so modern) software, Emacspeak can be safely uninstalled without adversely affecting the continued performance of the computer.
These same tests also revealed that once uninstalled, the user stopped functioning altogether. Speaking with Aster Labrador, the creator of Emacspeak once pointed out that these results re-emphasize the user-centric design of Emacspeak; 'It is the user –and not the computer– that stops functioning when Emacspeak is uninstalled!' • Emacspeak 47.0 (GentleDog) goes the next step in being helpful while letting users learn and grow. • Emacspeak 46.0 (HelpfulDog) heralds the coming of Smart Assistants.
• Emacspeak 45.0 (IdealDog) is named in recognition of Emacs' excellent integration with various programming language environments — thanks to this, Emacspeak is the IDE of choice for eyes-free software engineering. • Emacspeak 44.0 continues the steady pace of innovation on the audio desktop.
• Emacspeak 43.0 brings even more end-user efficiency by leveraging the ability to spatially place multiple audio streams to provide timely auditory feedback. • Emacspeak 42.0 while moving to GitHub from Google Code continues to innovate in the areas of auditory user interfaces and efficient, light-weight Internet access. • Emacspeak 41.0 continues to improve on the desire to provide not just equal, but superior access — technology when correctly implemented can significantly enhance the human ability.
• Emacspeak 40.0 goes back to Web basics by enabling access to large amounts of readable Web content. • Emacspeak 39.0 continues the Emacspeak tradition of increasing the breadth of user tasks that are covered without introducing unnecessary bloatware. • Emacspeak 38.0 is the latest in a series of award-winning releases from Emacspeak Inc.
• Emacspeak 37.0 continues the tradition of delivering robust software as reflected by its code-name. • Emacspeak 36.0 enhances the audio desktop with many new tools including full EPub support — hence the name EPubDog. • Emacspeak 35.0 is all about teaching a new dog old tricks — and is aptly code-named HeadDog in honor of our new Press/Analyst contact. • Emacspeak-34.0 (AKA Bubbles) established a new beach-head with respect to rapid task completion in an eyes-free environment. • Emacspeak-33.0 AKA StarDog brings unparalleled cloud access to the audio desktop. • Emacspeak 32.0 AKA LuckyDog continues to innovate via open technologies for better access. • Emacspeak 31.0 AKA TweetDog — adds tweeting to the Emacspeak desktop.
• Emacspeak 30.0 AKA SocialDog brings the Social Web to the audio desktop—you can't but be social if you speak! • Emacspeak 29.0—AKA AbleDog—is a testament to the resilience and innovation embodied by Open Source software—it would not exist without the thriving Emacs community that continues to ensure that Emacs remains one of the premier user environments despite perhaps also being one of the oldest. • Emacspeak 28.0—AKA PuppyDog—exemplifies the rapid pace of development evinced by Open Source software.
• Emacspeak 27.0—AKA FastDog—is the latest in a sequence of upgrades that make previous releases obsolete and downgrades unnecessary. • Emacspeak 26—AKA LeadDog—continues the tradition of introducing innovative access solutions that are unfettered by the constraints inherent in traditional adaptive technologies. • Emacspeak 25 —AKA ActiveDog —re-activates open, unfettered access to online information. • Emacspeak-Alive —AKA LiveDog —enlivens open, unfettered information access with a series of live updates that once again demonstrate the power and agility of open source software development. • Emacspeak 23.0 — AKA Retriever—went the extra mile in fetching full access. • Emacspeak 22.0 —AKA GuideDog —helps users navigate the Web more effectively than ever before.
• Emacspeak 21.0 —AKA PlayDog —continued the Emacspeak tradition of relying on enhanced productivity to liberate users. • Emacspeak-20.0 —AKA LeapDog —continues the long established GNU/Emacs tradition of integrated innovation to create a pleasurable computing environment for eyes-free interaction. • emacspeak-19.0 –AKA WorkDog– is designed to enhance user productivity at work and leisure. • Emacspeak-18.0 –code named GoodDog– continued the Emacspeak tradition of enhancing user productivity and thereby reducing total cost of ownership. • Emacspeak-17.0 –code named HappyDog– enhances user productivity by exploiting today's evolving WWW standards. • Emacspeak-16.0 –code named CleverDog– the follow-up to SmartDog– continued the tradition of working better, faster, smarter.
• Emacspeak-15.0 –code named SmartDog–followed up on TopDog as the next in a continuing series of award-winning audio desktop releases from Emacspeak Inc. • Emacspeak-14.0 –code named TopDog–was the first release of this millennium.
• Emacspeak-13.0 –codenamed YellowLab– was the closing release of the 20th century. • Emacspeak-12.0 –code named GoldenDog– began leveraging the evolving semantic WWW to provide task-oriented speech access to Webformation.
• Emacspeak-11.0 –code named Aster– went the final step in making Linux a zero-cost Internet access solution for blind and visually impaired users. • Emacspeak-10.0 –(AKA Emacspeak-2000) code named WonderDog– continued the tradition of award-winning software releases designed to make eyes-free computing a productive and pleasurable experience. • Emacspeak-9.0 –(AKA Emacspeak 99) code named BlackLab– continued to innovate in the areas of speech interaction and interactive accessibility.
• Emacspeak-8.0 –(AKA Emacspeak-98++) code named BlackDog– was a major upgrade to the speech output extension to Emacs. • Emacspeak-95 (code named Illinois) was released as OpenSource on the Internet in May 1995 as the first complete speech interface to UNIX workstations. The subsequent release, Emacspeak-96 (code named Egypt) made available in May 1996 provided significant enhancements to the interface. Emacspeak-97 (Tennessee) went further in providing a true audio desktop.
Emacspeak-98 integrated Internetworking into all aspects of the audio desktop to provide the first fully interactive speech-enabled WebTop. Originally based at Cornell (NY), home to Auditory User Interfaces (AUI) on the WWW, Emacspeak is now maintained on GitHub. The system is mirrored world-wide by an international network of software archives and bundled voluntarily with all major Linux distributions. On Monday, April 12, 1999, Emacspeak became part of the Permanent Research Collection on Information Technology at the Smithsonian's National Museum of American History. The Emacspeak mailing list is archived at Vassar –the home of the Emacspeak mailing list– thanks to Greg Priest-Dorman, and provides a valuable knowledge base for new users.
Going forward, Tilden acknowledges his exclusive monopoly on setting the direction of the Emacspeak Audio Desktop, and promises to exercise this freedom to innovate and the resulting power responsibly (as before) in the interest of all dogs. *About This Release: Windows-Free (WF) is a favorite battle-cry of The League Against Forced Fenestration (LAFF) –see the LAFF literature for details on the ill-effects of Forced Fenestration. CopyWrite )C( Aster, Hubbell and Tilden Labrador. All Writes Reserved. HeadDog (DM), LiveDog (DM), GoldenDog (DM), BlackDog (DM) etc., are Registered Dogmarks of Aster, Hubbell and Tilden Labrador. All other dogs belong to their respective owners.
(Post by T. Raman, November 22, 2017.)

There are various ways of running a shell inside Emacs. I don't find that I need to use it very often as I do so much within Emacs these days, but when I do it's handy to quickly bring up the shell, run a command and then dismiss it again. The shell-pop package does this very smartly. One key combo (I use C-t) pops up a shell window for the directory containing the file you are currently editing, and then C-t dismisses the shell window when you are done. The github page has lots of details on how to configure it; I use a fairly minimal setup with the ansi-term terminal emulator and zsh as my shell.
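A minimal setup along those lines might look like the following sketch. This is an illustration built from shell-pop's documented user options (shell-pop-shell-type, shell-pop-term-shell, shell-pop-universal-key), not necessarily the exact configuration linked from the original post:

```elisp
;; Hypothetical minimal shell-pop setup: C-t toggles an ansi-term
;; running zsh in the directory of the current buffer's file.
(use-package shell-pop
  :ensure t
  :custom
  ;; Use ansi-term for the pop-up shell
  (shell-pop-shell-type '("ansi-term" "*ansi-term*"
                          (lambda () (ansi-term shell-pop-term-shell))))
  (shell-pop-term-shell "/bin/zsh")
  ;; One key to pop the shell up and dismiss it again
  (shell-pop-universal-key "C-t")
  ;; cd to the directory of the file being edited
  (shell-pop-autocd-to-working-dir t))
```

With this in place, C-t from any file buffer opens the shell window there, and C-t again puts it away.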
Here is my configuration. This is a reference for future-me, and possibly someone pulling off an all-nighter trying to get nullmailer to use the correct “remote”. What is nullmailer and why use it? Nullmailer is a simple mail transfer agent that can forward mail to a remote mail server (or a bunch of them). I use Emacs to send email, and it can be configured to talk to a remote SMTP server to send email. But this blocks Emacs until the email is sent and the connection closed. This is annoying, and having nullmailer installed locally basically lets Emacs delegate this job without blocking.
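On the Emacs side, delegating to a local MTA like nullmailer usually just means pointing mail sending at the local sendmail binary rather than at a remote SMTP server. A sketch, using standard mail/message-mode variables (not taken from the original post):

```elisp
;; Hand outgoing mail to the local MTA; nullmailer provides a
;; sendmail-compatible binary, so Emacs returns immediately while
;; nullmailer queues and relays the message in the background.
(setq send-mail-function 'sendmail-send-it              ; mail-mode
      message-send-mail-function 'message-send-mail-with-sendmail
      sendmail-program "/usr/sbin/sendmail")            ; path may vary
```

The path to the sendmail shim may differ by distribution; check where your nullmailer package installs it.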
Why multiple remotes? I have multiple email accounts, and I'd like to use the correct remote server for sending email based on the FROM address. I expected nullmailer to have some configuration to be able to specify this. But, it turns out that nullmailer just forwards the email to all the configured remotes. How do we, then, send email from the correct remote SMTP server? Currently, I have two remotes: my personal domain (@muse-amuse.in) and GMail.
Having GMail as the first remote in nullmailer's configuration wouldn't let me send emails from my personal domain. GMail seems to agree to send the email coming from @muse-amuse.in, but overwrites the MAIL FROM address and changes it to my GMail address. So, @muse-amuse.in has to be the first remote. But this server also seemed to accept and send emails with a @gmail.com FROM address.
This was causing emails sent from my GMail ID to go into spam, as expected. I had to reconfigure this mail server to reject relaying mails that didn't belong to the correct domain names – i.e., reject relaying emails which had @gmail.com in the FROM address. smtpd_sender_restrictions had to be modified to include reject_sender_login_mismatch along with the other values, and smtpd_sender_login_maps had to be set to allow only the @muse-amuse.in domain. The postfix documentation explains this in more detail. (Posted November 17, 2017.)
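The relevant Postfix settings might look roughly like this. This is a sketch using standard Postfix options; the map file path and its contents are illustrative, not copied from the actual server:

```
# /etc/postfix/main.cf (fragment)
# Reject senders whose MAIL FROM does not match their SASL login
smtpd_sender_restrictions =
    reject_sender_login_mismatch,
    permit_sasl_authenticated,
    reject

# Which SASL logins may use which sender addresses/domains
smtpd_sender_login_maps = hash:/etc/postfix/sender_login_maps
```

The map would then pair the allowed sender domain with the authorized login; after editing it, run postmap on the file and reload Postfix.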
counsel-etags is a generic solution for code navigation in Emacs. It basically needs no setup. For example, one command, counsel-etags-find-tag-at-point, is enough to start code navigation immediately. The package solves all the problems of using Ctags/Etags with Emacs. Problem 1: Ctags takes a few seconds to update the tags file (the index file used to look up tags). The updating process blocks the user's further interaction. This problem is solved by the virtual updating function from counsel-etags.
The setup is simple:

;; Don't ask before rereading the TAGS files if they have changed
(setq tags-revert-without-query t)
;; Don't warn when TAGS files are large
(setq large-file-warning-threshold nil)
;; Set up auto-update after saving
(add-hook 'prog-mode-hook
          (lambda ()
            (add-hook 'after-save-hook
                      'counsel-etags-virtual-update-tags 'append 'local)))

Problem 2: Tag lookup may fail if the latest code is not scanned yet.
This problem is solved by running counsel-etags-grep automatically if counsel-etags-find-tag-at-point fails. So users always get results. There are also other enhancements. Enhancement 1: The Levenshtein distance algorithm is used to place the better matching candidates at the top.
For example, a function named renderTable could be defined in many places in a ReactJS project. But it's very likely that the user prefers the definition in the same component or the same folder where she triggered code navigation. Enhancement 2: It's inefficient to search for the same tag again and again.
The command counsel-etags-recent-tag jumps back to previously visited definitions. Enhancement 3: Ivy provides the filter UI for counsel-etags. This means all of Ivy's functionality is also available. For example, users can input '!keyword1' to exclude candidates matching 'keyword1'. Enhancement 4: counsel-etags-grep uses the fastest grep program installed, falling back to standard grep otherwise.
Please check the project documentation for more tips. (Post by Chen Bin, November 12, 2017.)

Following an earlier post on pdf-tools, here are a few customisations I've found handy. For scanned pdfs, pdf-view-auto-slice-minor-mode can be useful to turn on; it is bound to something like s a. It auto-trims the borders of each page of the pdf as it encounters them. The following sets up a variety of colour-filter modes (good for night-time viewing, or any time really that you don't want your eyeballs blasted with blazing white light):

;; midnite mode hook
(add-hook 'pdf-view-mode-hook
          (lambda ()
            (pdf-view-midnight-minor-mode))) ; automatically turns on midnight-mode for pdfs

;; set the amber profile as default (see below)
(setq pdf-view-midnight-colors '("#ff9900" . "#0a0a12"))

(defun bms/pdf-no-filter ()
  "View pdf without colour filter."
  (interactive)
  (pdf-view-midnight-minor-mode -1))

;; change midnite mode colours functions
(defun bms/pdf-midnite-original ()
  "Set pdf-view-midnight-colors to original colours."
  (interactive)
  (setq pdf-view-midnight-colors '("#839496" . "#002b36")) ; original values
  (pdf-view-midnight-minor-mode))

(defun bms/pdf-midnite-amber ()
  "Set pdf-view-midnight-colors to amber on dark slate blue."
  (interactive)
  (setq pdf-view-midnight-colors '("#ff9900" . "#0a0a12")) ; amber
  (pdf-view-midnight-minor-mode))

(defun bms/pdf-midnite-green ()
  "Set pdf-view-midnight-colors to green on black."
  (interactive)
  (setq pdf-view-midnight-colors '("#00B800" . "#000000")) ; green
  (pdf-view-midnight-minor-mode))

(defun bms/pdf-midnite-colour-schemes ()
  "Midnight mode colour schemes bound to keys."
  (local-set-key (kbd "!") (quote bms/pdf-no-filter))
  (local-set-key (kbd "@") (quote bms/pdf-midnite-amber))
  (local-set-key (kbd "#") (quote bms/pdf-midnite-green))
  (local-set-key (kbd "$") (quote bms/pdf-midnite-original)))

(add-hook 'pdf-view-mode-hook 'bms/pdf-midnite-colour-schemes)

This automatically sets pdf-tools to display using the midnight mode amber filter. You can return to the original/no-filter with '!' (i.e. S-1); set the amber filter with '@' (i.e. S-2); set the green filter with '#' (i.e. S-3); and set the bluish original midnight mode colours with '$' (i.e. S-4). See below for screenshots of these different settings.

After the demise of, I switched for some time to. Unfortunately, its Android app never worked well for me. And integrating more of my life into Emacs is always desirable, so once I saw there was an Android client for Elfeed, the fantastic Emacs RSS reader, I made the switch.
The tricky thing with the Elfeed Android client is that it wants to connect to the web interface of an instance of Elfeed running inside of Emacs. I could have done this with my home computer, but that would require poking a hole through the firewall and in any case would be non-ideal when, for instance, I was travelling. About a month ago I hit upon a cheap (in fact, free) solution for running a remote instance of Emacs running Elfeed that is connectable with the Android client. The VPS provider Wishosting offers an OpenVZ mini cheaply, and for free if you stick a link to Wishosting on your own domain.
On my home desktop, work desktop, and laptop, I have installed Syncthing and I use this to keep the Elfeed database in sync between these machines. In this blogpost I outline how to add a remote always-on instance of Elfeed running in Wishosting's OpenVZ mini which also remains in sync with all of the other machines. (I use elfeed-org to organise my feeds, and just keep the elfeed.org file in ~/.elfeed/.) Just use Syncthing to keep the ~/.elfeed directory sync'ed between all of the machines (including any VPS's). I set up an Ubuntu 16.04 LTS VPS on Wishosting, and then installed Emacs and Syncthing, and that is what I would recommend. Steps: • Create your Ubuntu 16.04 LTS VPS.
• Make a (non-root) user with superuser capability. Login with this user.
• Install Emacs and Syncthing.
• Configure Syncthing appropriately.
• Create and save a .emacs file in your user's ~ with the following contents:

;; Package setup
(require 'package)
(package-initialize nil)
(setq package-enable-at-startup nil)
(add-to-list 'package-archives '("org" . "https://orgmode.org/elpa/") t)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)
(add-to-list 'package-archives '("marmalade" . "https://marmalade-repo.org/packages/") t)
(package-initialize)
;; general: add packages to load path
(let ((default-directory "~/.emacs.d/elpa/"))
  (normal-top-level-add-subdirs-to-load-path))
;; make sure use-package is installed
(unless (package-installed-p 'use-package)
  (package-refresh-contents)
  (package-install 'use-package))
;;; use-package
(require 'use-package)
;; Load elfeed
(use-package elfeed
  :ensure t
  :bind (:map elfeed-search-mode-map
              ("A" . bjm/elfeed-show-all)
              ("E" . bjm/elfeed-show-emacs)
              ("D" . bjm/elfeed-show-daily)
              ("q" . …)))

• Enable lingering so your user's services keep running after logout: # loginctl enable-linger USERNAME
• Install the Elfeed Android client on your mobile, enter the app's Settings, and put in whatever your Wishosting ip is plus the port you chose above (NNNNN), e.g., for the Elfeed web url.
• That should be it. Now you'll have access to your rss feed all over the world, and Syncthing will ensure that all changes are propagated to all of your Emacs instances, including your VPS. These changes include adding and deleting rss feeds and marking posts as 'read'.
• Note: on my desktops/laptop I use the following setup for Elfeed:

;; Load elfeed
(use-package elfeed
  :ensure t
  :bind (:map elfeed-search-mode-map
              ("A" . bjm/elfeed-show-all)
              ("E" . bjm/elfeed-show-emacs)
              ("D" . bjm/elfeed-show-daily)
              ("q" . …)))
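The post does not show how the web endpoint itself gets started on the VPS. One way to do it is with the elfeed-web package, which serves the Elfeed database over HTTP via simple-httpd; this is a hypothetical sketch, not the author's actual config, and the port is whatever NNNNN you opened on the VPS:

```elisp
;; Sketch: serve Elfeed over HTTP so the Android client can reach it.
(use-package elfeed-web
  :ensure t
  :config
  (setq httpd-port 8080)  ; simple-httpd port; match the port in the app
  (elfeed-web-start))
```

Run this inside the always-on Emacs instance on the VPS, and point the Android client's web url at that host and port.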
Probably temporary: a hack for the elfeed-goodies date column (truncated here):

(defun elfeed-goodies/search-header-draw ()
  "Returns the string to be used as the Elfeed header."
  …)

Our gym had a nifty Bluetooth-enabled jump rope that would report how many jumps you made in a session. Then the rope broke. This left me thinking: could I approximate the same thing by using the sensors on my phone?
Poking around, I learned that there's a class of app that gives you access to these sensors. In that genre, Physics Toolbox Suite appears to be a top pick. Launching the app gives me access to all sorts of data, from the gyroscope, to barometric pressure, to the GPS.
It also gives me a way to record the data for a period of time, generating a simple CSV file. And if all that weren't enough, it's also ad free. To show my support, I just purchased the paid version, even though the standard version does everything I need. What caught my eye in the app was the Linear Accelerometer screen: I've long since forgotten the majority of my high school physics lessons, but I do know this: units matter. As the app reminded me (or perhaps taught me), acceleration is measured in meters per second, squared. What jumped out at me was the presence of meters, or distance. I now had my goal: rather than count the number of jumps in a jump-rope session, wouldn't it be far cooler to count distance?
That is, how many vertical meters did I manage to conquer while jumping rope? I wasn't sure how I was going to tease this data out of the accelerometer, but I knew I was going to try. The first stop in my journey was to head over to YouTube and learn how acceleration and distance are 'connected.' I found a video that gave me enough background on distance, velocity and acceleration to let me derive a solution.
The Physics Toolbox Suite app would give me a CSV file that provides an acceleration value every 100th of a second. Thinking through these concepts, I realized that at each moment I could derive speed and then distance by looking at the change in time and the acceleration. And because I knew that I was starting by standing still (that is, my velocity was zero), the whole system could be initialized and, in theory, a final answer derived. I was quite proud of myself. Shira, for her part, rattled off the exact formula and explained to me that this is super basic physics.
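Written out explicitly, the running update described above is a standard Euler integration (the notation here is mine, not the post's):

```latex
\[
v_n = v_{n-1} + a_n\,\Delta t, \qquad
d_n = d_{n-1} + v_n\,\Delta t, \qquad
v_0 = d_0 = 0,
\]
```

with \(\Delta t = 0.01\,\mathrm{s}\) per CSV sample: each row's acceleration updates the velocity, and the velocity in turn accumulates into the distance.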
Apparently, I could have skipped the Khan Academy and just talked to her. Now that I had a method for calculating distance via the accelerometer, I needed to try it in a controlled environment. I laid down a tape measure and noted 4 meters. I then walked these 4 meters while capturing the accelerometer data via the Physics Toolbox app. With a few test runs captured, it was time to do the fun part: program the above solution.
Because the data was on my phone, I thought it appropriate to code the solution on my phone. I busted out Termux, emacs and tinyscheme. Tinyscheme is a delightful tool to work with because it's so lean. This can also be a source of frustration: to implement the above solution I needed a way of reading from a CSV file, and I'd rather not have to implement the string and file handling. Fortunately, I was able to borrow some code (thanks!), which meant that I didn't have to implement string-split or read-line. The following function does the heavy lifting of reading and parsing the CSV file:

;; Call handler on each row in the file.
;; Assume the first row is a header and the
;; following rows are all numeric data
(define (with-data file handler init)
  (define (wrap data)
    (lambda (index)
      (case index
        ((all) data)
        (else (list-ref data index)))))
  (call-with-input-file file
    (lambda (port)
      (let ((header-row (read-line port)))
        (let loop ((line (read-line port))
                   (accum (wrap init)))
          (cond ((eof-object? line) (accum 'all))
                (else
                 (loop (read-line port)
                       (wrap (handler accum
                                      (wrap (map string->number
                                                 (string-split #\, line)))))))))))))

Ah, it's such a joy programming this sort of thing in Scheme.
It's like programming with Play-Doh: you can invent new constructs with so little effort. With the file parsing out of the way, it was time to write the actual code to calculate distance:

;; Use Physics 101 to calculate distance
;; based on the current acceleration, time
;; and velocity
(define (calculate-distance accum data)
  (let* ((t-now (data 0))
         (a-now (+ (data 1)    ; ax
                   (data 2)    ; ay
                   (data 3)))  ; az
         (t-prev (accum 0))
         (v-prev (accum 1))
         (d-prev (accum 2))
         (t (- t-now t-prev))
         (v-now (+ v-prev (* a-now t)))
         (d-now (+ d-prev (* v-now t))))
    (list t-now v-now d-now)))

There's nothing particularly sexy here. I'm taking in the time and acceleration data (via data) and using my accumulated values to calculate the current velocity and distance. It was now time for the moment of truth. I ran the code against a sample data file: and it was a bust.
My code tells me that I covered 2.54 meters instead of the 4 meters I knew I covered. I've been through the code a number of times, and I think the algorithm is fine. My guess is that I'm expecting too much accuracy out of my phone's sensor.
Though, maybe my algorithm is fine and I just made a boneheaded mistake. You can see the full source code of the project and the data file. Still, this exercise has been valuable. I learned about the Physics Toolbox app, which is definitely a keeper. I once again saw the amazing power of Termux, emacs, tinyscheme and how effortlessly they all come together on my cell phone. And I even learned some physics along the way. While I think my goal of measuring how 'far' I jumped may not work, I'm still not convinced I can't use the sensors on my phone to at least count jumps. The data is there, the function to process the data file is ready; I just need to plug in the right algorithm to make this all make sense. Do you see where my assumptions about the accelerometer fell apart? Please explain in the comments! (Post by Ben Simon, November 9, 2017.)

The user story is the following: • In your company/organisation, you have to follow some rule of prefixing all your commits with either a keyword (Add/Remove/Fix/Hotfix/Bump/Release) or an issue number, or something like that. • When you are about to commit, you know what you just did (you should!), but you can't remember which particular issue number it was.
• Then you go to jira/trello/github-issues/you-name-it, look for the issue, and insert its number. Usually, as it's been a long-running issue, you have commits with that number in your recent commit history, so an alternative to the last bullet point is to have a quick look at git log.

Introduction

Recently I started tracking my time at work using Org's clocking commands, and it's working out very well. The actual act of tracking my time makes me much more focused, and I'm wasting less time and working more on important, planned, relevant things. It's also helping me understand how much time I spend on each of the three main pillars of my work (librarianship + research and professional development + service). In order to understand all this well I wrote some code to turn Org's clocktable into something more usable. This is the first of two or three posts showing what I have.
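For reference, the clocktable mentioned above is a standard Org dynamic block; a minimal one looks like this (the :scope and :maxlevel parameters shown here are illustrative, not necessarily the ones I use):

```
#+BEGIN: clocktable :scope file :maxlevel 4 :block today
#+END:
```

Pressing C-c C-c with point on the #+BEGIN line (re)generates the table from the CLOCK entries in the file; C-c C-x C-r (org-clock-report) inserts such a block for you.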
Three pillars

At York University, where I work, librarians and archivists have academic status. We are not faculty (that's the professors), but we're very similar. We have academic freedom (important). We get "continuing appointment," not tenure, but the process is much the same.
University professors have three pillars to their work: teaching, research and service. Service is basically work that contributes to the running of the university: serving on committees (universities have lots of committees, and they do important work, like vetting new courses and programs, allocating research funds, or deciding who gets tenure), being on academic governance bodies such as faculty councils and Senate, having a position in the union, etc. Usually there’s a 40/40/20 ratio on these three areas: people spend about 40% of their time on teaching, 40% on research and 20% on service. This fluctuates term to term and year to year—and person to person—but that’s the general rule in most North American universities, as I understand it. Waiting for Sir Simon Rattle and the Berlin Philharmonic to enter Roy Thomson Hall, last November. For librarians and archivists the situation can be different. Instead of teaching, let’s say we do “librarianship” as a catch-all term.
(Or "archivy," which the archivists assure me is a real word, but I still think it looks funny.) Then we also do professional development/research and service. In some places, librarians have full parity with professors, and they have the 40/40/20 ratio. That is ideal. A regrettable counterexample is a place where librarians and archivists have to spend 75% of their time on professional work.
That severely limits the contributions they can make both to the university and to librarianship and scholarship in general. At York there is no defined ratio. For professors it’s understood to be the 40/40/20, but for librarians and archivists I think it’s understood that is not our ratio, but nothing is set out instead. (This, and that profs have a 2.5 annual course teaching load but we do not have an equivalent “librarianship load,” leads to problems.) I have an idea of what the ratio should be, but I’m not going to say it here because this may become a bargaining issue. I didn’t know if my work matched that ratio because I don’t have exact details about how I spend my time. I’ve been doing a lot of service, but how much?
How much of my time is spent on research? This question didn’t come to me on my own. A colleague started tracking her time a couple of months ago, jotting things down each day. She said she hadn’t realized just how much damned time it takes to schedule things. I was inspired by her to start clocking my own time.
This is where I got to apply an aspect of I’d read about but never used. Org is amazing!
Work diary

I keep a file, work-diary.org, where I put notes on everything I do. I changed how I use subheadings and now I give every day this structure:

* 2017-12 December
** [2017-12-01 Fri]
*** PPK
*** PCS
*** Service

"PPK" is "professional performance and knowledge," which is our official term for "librarianship" or "archivy." "PCS" is "professional contribution and standing," which is the umbrella term for research and more for faculty. Right now for us that pillar is called "professional development," but that's forty-year-old terminology we're trying to change, so I use the term faculty use. (A fuller explanation is available elsewhere.) On my screen, because of my theme, that looks like this: Initial structure.
Clocking in

First thing in the morning, I create that structure, then under the date heading I run C-u C-u C-c C-x C-i (where C-c means Ctrl-c). Now, I realize that's a completely ridiculous key combination to exist, but when you start using Emacs heavily, you get used to such incantations and they become second nature.
C-c C-x C-i is the command org-clock-in. As the docs say, "With two C-u C-u prefixes, clock into the task at point and mark it as the default task; the default task will then always be available with letter d when selecting a clocking task." That will make more sense in a minute. When I run that command, Org adds a little block under the heading:

** [2017-12-01 Fri]
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:30]
:END:
*** PPK
*** PCS
*** Service

The clock is running, and a little timer shows up in my mode line that tells me how long I've been working on the current thing. I'll spend a while deleting email and checking some web sites, then let's say I decide to respond to an email about reference desk statistics, because I can get it done before I have to head over to a 10:30 meeting.
I make a new subheading under PPK, because this is librarianship work, and clock into it with C-c C-x C-i. The currently open task gets closed, the duration is noted, and a new clock starts.

** [2017-12-01 Fri]
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:30]--[2017-12-01 Fri 09:50] => 0:20
:END:
*** PPK
**** Libstats stuff
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:50]
:END:
Pull numbers on weekend desk activity for A.
*** PCS
*** Service

(Remember this doesn't look ugly the way I see it in Emacs. There's another screenshot below.) I work on that until 10:15, then I make a new task (under Service) and clock into it (again with C-c C-x C-i). I'm going to a monthly meeting of the union's stewards' council, and walking to the meeting and back counts as part of the time spent. (York's campus is pretty big.)

** [2017-12-01 Fri]
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:30]--[2017-12-01 Fri 09:50] => 0:20
:END:
*** PPK
**** Libstats stuff
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:50]--[2017-12-01 Fri 10:15] => 0:25
:END:
Pull numbers on weekend desk activity for A.
*** PCS
*** Service
**** Stewards' Council meeting
:LOGBOOK:
CLOCK: [2017-12-01 Fri 10:15]
:END:

The meeting ends at 1, and I head back to my office. Lunch was provided during the meeting (probably pizza or extremely bready sandwiches, but always union-made), so I don't take a break for that. In my office I'm not ready to immediately settle into a task, so I hit C-u C-c C-x C-i (just the one prefix), which lets me "select the task from a list of recently clocked tasks." This is where the d mentioned above comes in: a little list of recent tasks pops up, and I can just hit d to clock into the [2017-12-01 Fri] task.

** [2017-12-01 Fri]
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:30]--[2017-12-01 Fri 09:50] => 0:20
CLOCK: [2017-12-01 Fri 13:15]
:END:
*** PPK
**** Libstats stuff
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:50]--[2017-12-01 Fri 10:15] => 0:25
:END:
Pull numbers on weekend desk activity for A.
*** PCS
*** Service
**** Stewards' Council meeting
:LOGBOOK:
CLOCK: [2017-12-01 Fri 10:15]--[2017-12-01 Fri 13:15] => 3:00
:END:
Copious meeting notes here.

Now I might get a cup of tea if I didn't pick one up on the way, or check email or chat with someone about something. My time for the day is accruing, but not against any specific task.
Then, let's say it's a focused day, and I settle in and work until 4:30 on a project about ebook usage. I clock in to that, then when I'm ready to leave I clock out of it with C-c C-x C-o.

** [2017-12-01 Fri]
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:30]--[2017-12-01 Fri 09:50] => 0:20
CLOCK: [2017-12-01 Fri 13:15]--[2017-12-01 Fri 13:40] => 0:25
:END:
*** PPK
**** Libstats stuff
:LOGBOOK:
CLOCK: [2017-12-01 Fri 09:50]--[2017-12-01 Fri 10:15] => 0:25
:END:
Pull numbers on weekend desk activity for A.
**** Ebook usage
:LOGBOOK:
CLOCK: [2017-12-01 Fri 13:40]--[2017-12-01 Fri 16:30] => 2:50
:END:
Wrote code to grok EZProxy logs and look up ISBNs of Scholars Portal ebooks.
*** PCS
*** Service
**** Stewards' Council meeting
:LOGBOOK:
CLOCK: [2017-12-01 Fri 10:15]--[2017-12-01 Fri 13:15] => 3:00
:END:
Copious meeting notes here.
In Emacs, this looks much more appealing. Final structure at end of day. That's one day of time clocked. In my next post I'll add another day and a clocktable, and then I'll show the code I use to summarize it all.
Disclaimer

I'm doing all this for my own use, to help me be as effective and efficient and aware of my work habits as I can be. I want to spend as much of my time as I can working on the most important work. Sometimes that's writing code, sometimes that's doing union work, sometimes that's chatting with a colleague about something that's a minor thing to me but that takes an hour because it's important to them, and sometimes that's watching a student cry in my office and waiting for the moment when I can tell them that as stressful as things are right now it's going to get better. (Women librarians get much, much more of this than men do, but I still get some. It's a damned tough thing, doing a university degree.) I recommend my colleague Lisa Sloniowski's award-winning article (Library Trends Vol. 64, No. 4, 2016) for a serious look at all this. (Post by William Denton, November 7, 2017.)